2025-05-31 15:25:56.613051 | Job console starting 2025-05-31 15:25:56.630025 | Updating git repos 2025-05-31 15:25:56.727614 | Cloning repos into workspace 2025-05-31 15:25:56.942733 | Restoring repo states 2025-05-31 15:25:56.977030 | Merging changes 2025-05-31 15:25:56.977053 | Checking out repos 2025-05-31 15:25:57.468082 | Preparing playbooks 2025-05-31 15:25:58.522219 | Running Ansible setup 2025-05-31 15:26:03.872862 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-05-31 15:26:04.757742 | 2025-05-31 15:26:04.757947 | PLAY [Base pre] 2025-05-31 15:26:04.777401 | 2025-05-31 15:26:04.777595 | TASK [Setup log path fact] 2025-05-31 15:26:04.808712 | orchestrator | ok 2025-05-31 15:26:04.827090 | 2025-05-31 15:26:04.827271 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-31 15:26:04.872028 | orchestrator | ok 2025-05-31 15:26:04.886185 | 2025-05-31 15:26:04.886313 | TASK [emit-job-header : Print job information] 2025-05-31 15:26:04.934912 | # Job Information 2025-05-31 15:26:04.935346 | Ansible Version: 2.16.14 2025-05-31 15:26:04.935419 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04 2025-05-31 15:26:04.935491 | Pipeline: post 2025-05-31 15:26:04.935536 | Executor: 521e9411259a 2025-05-31 15:26:04.935633 | Triggered by: https://github.com/osism/testbed/commit/68356aa773a8e9a33595cdb28094451ccfe80616 2025-05-31 15:26:04.935676 | Event ID: 87865ae8-3e33-11f0-9145-537ddcdc4d1c 2025-05-31 15:26:04.945862 | 2025-05-31 15:26:04.945998 | LOOP [emit-job-header : Print node information] 2025-05-31 15:26:05.072179 | orchestrator | ok: 2025-05-31 15:26:05.072420 | orchestrator | # Node Information 2025-05-31 15:26:05.072455 | orchestrator | Inventory Hostname: orchestrator 2025-05-31 15:26:05.072480 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-05-31 15:26:05.072519 | orchestrator | Username: zuul-testbed02 2025-05-31 15:26:05.072565 | orchestrator | Distro: Debian 12.11 2025-05-31 15:26:05.072616 | orchestrator | Provider: static-testbed 2025-05-31 15:26:05.072650 | orchestrator | Region: 2025-05-31 15:26:05.072680 | orchestrator | Label: testbed-orchestrator 2025-05-31 15:26:05.072708 | orchestrator | Product Name: OpenStack Nova 2025-05-31 15:26:05.072738 | orchestrator | Interface IP: 81.163.193.140 2025-05-31 15:26:05.085769 | 2025-05-31 15:26:05.085941 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-05-31 15:26:05.596944 | orchestrator -> localhost | changed 2025-05-31 15:26:05.614371 | 2025-05-31 15:26:05.614582 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-05-31 15:26:06.746226 | orchestrator -> localhost | changed 2025-05-31 15:26:06.764129 | 2025-05-31 15:26:06.764276 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-05-31 15:26:07.138198 | orchestrator -> localhost | ok 2025-05-31 15:26:07.155754 | 2025-05-31 15:26:07.155990 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-05-31 15:26:07.195503 | orchestrator | ok 2025-05-31 15:26:07.215584 | orchestrator | included: /var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-05-31 15:26:07.224494 | 2025-05-31 15:26:07.224666 | TASK [add-build-sshkey : Create Temp SSH key] 2025-05-31 15:26:08.751754 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-05-31 15:26:08.752300 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/work/6e00cdb8ce344775ac0ff3220588f416_id_rsa 2025-05-31 15:26:08.752415 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/work/6e00cdb8ce344775ac0ff3220588f416_id_rsa.pub 2025-05-31 15:26:08.752491 | orchestrator -> localhost | The key fingerprint is: 2025-05-31 15:26:08.752621 | orchestrator -> localhost | SHA256:24UJXIf+1DcbuQQFmshhFJDgjPnwwo884VxPlf3f5nk zuul-build-sshkey 2025-05-31 15:26:08.752693 | orchestrator -> localhost | The key's randomart image is: 2025-05-31 15:26:08.752784 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-05-31 15:26:08.752847 | orchestrator -> localhost | | ...+=o...o. | 2025-05-31 15:26:08.752908 | orchestrator -> localhost | | = oo.*.o. | 2025-05-31 15:26:08.752966 | orchestrator -> localhost | | + o o* + .. .| 2025-05-31 15:26:08.753022 | orchestrator -> localhost | | . + ...oo .=.| 2025-05-31 15:26:08.753078 | orchestrator -> localhost | | + + .S oo....=| 2025-05-31 15:26:08.753139 | orchestrator -> localhost | | + * o o .. .o.| 2025-05-31 15:26:08.753196 | orchestrator -> localhost | | * . .. . .o| 2025-05-31 15:26:08.753252 | orchestrator -> localhost | | . oE| 2025-05-31 15:26:08.754440 | orchestrator -> localhost | | .o| 2025-05-31 15:26:08.754710 | orchestrator -> localhost | +----[SHA256]-----+ 2025-05-31 15:26:08.754986 | orchestrator -> localhost | ok: Runtime: 0:00:00.969137 2025-05-31 15:26:08.769245 | 2025-05-31 15:26:08.769390 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-05-31 15:26:08.810925 | orchestrator | ok 2025-05-31 15:26:08.826748 | orchestrator | included: /var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-05-31 15:26:08.853408 | 2025-05-31 15:26:08.854588 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-05-31 15:26:08.884953 | orchestrator | skipping: Conditional result was False 2025-05-31 15:26:08.902612 | 2025-05-31 15:26:08.902792 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-05-31 15:26:09.566685 | orchestrator | changed 2025-05-31 15:26:09.573510 | 2025-05-31 15:26:09.573627 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-05-31 15:26:09.847885 | orchestrator | ok 2025-05-31 15:26:09.854470 | 2025-05-31 15:26:09.854623 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-05-31 15:26:10.345622 | orchestrator | ok 2025-05-31 15:26:10.355248 | 2025-05-31 15:26:10.355388 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-05-31 15:26:10.778211 | orchestrator | ok 2025-05-31 15:26:10.786781 | 2025-05-31 15:26:10.786949 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-05-31 15:26:10.812831 | orchestrator | skipping: Conditional result was False 2025-05-31 15:26:10.819956 | 2025-05-31 15:26:10.820073 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-05-31 15:26:11.321773 | orchestrator -> localhost | changed 2025-05-31 15:26:11.336464 | 2025-05-31 15:26:11.336648 | TASK [add-build-sshkey : Add back temp key] 2025-05-31 15:26:11.766796 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/work/6e00cdb8ce344775ac0ff3220588f416_id_rsa (zuul-build-sshkey) 2025-05-31 15:26:11.767204 | 
orchestrator -> localhost | ok: Runtime: 0:00:00.017639 2025-05-31 15:26:11.775039 | 2025-05-31 15:26:11.775152 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-05-31 15:26:12.216215 | orchestrator | ok 2025-05-31 15:26:12.223748 | 2025-05-31 15:26:12.223877 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-05-31 15:26:12.259089 | orchestrator | skipping: Conditional result was False 2025-05-31 15:26:12.322822 | 2025-05-31 15:26:12.322994 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-05-31 15:26:12.738112 | orchestrator | ok 2025-05-31 15:26:12.754272 | 2025-05-31 15:26:12.754400 | TASK [validate-host : Define zuul_info_dir fact] 2025-05-31 15:26:12.784719 | orchestrator | ok 2025-05-31 15:26:12.792165 | 2025-05-31 15:26:12.792268 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-05-31 15:26:13.169205 | orchestrator -> localhost | ok 2025-05-31 15:26:13.184867 | 2025-05-31 15:26:13.185046 | TASK [validate-host : Collect information about the host] 2025-05-31 15:26:14.406205 | orchestrator | ok 2025-05-31 15:26:14.421012 | 2025-05-31 15:26:14.421130 | TASK [validate-host : Sanitize hostname] 2025-05-31 15:26:14.497637 | orchestrator | ok 2025-05-31 15:26:14.506283 | 2025-05-31 15:26:14.506428 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-05-31 15:26:15.143630 | orchestrator -> localhost | changed 2025-05-31 15:26:15.158823 | 2025-05-31 15:26:15.159171 | TASK [validate-host : Collect information about zuul worker] 2025-05-31 15:26:15.635098 | orchestrator | ok 2025-05-31 15:26:15.644086 | 2025-05-31 15:26:15.644224 | TASK [validate-host : Write out all zuul information for each host] 2025-05-31 15:26:16.345306 | orchestrator -> localhost | changed 2025-05-31 15:26:16.357305 | 2025-05-31 15:26:16.357443 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-05-31 15:26:16.659879 | orchestrator | ok 2025-05-31 15:26:16.669816 | 2025-05-31 15:26:16.669959 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-05-31 15:26:57.956883 | orchestrator | changed: 2025-05-31 15:26:57.957139 | orchestrator | .d..t...... src/ 2025-05-31 15:26:57.957176 | orchestrator | .d..t...... src/github.com/ 2025-05-31 15:26:57.957201 | orchestrator | .d..t...... src/github.com/osism/ 2025-05-31 15:26:57.957223 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-05-31 15:26:57.957244 | orchestrator | RedHat.yml 2025-05-31 15:26:57.968116 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-05-31 15:26:57.968134 | orchestrator | RedHat.yml 2025-05-31 15:26:57.968188 | orchestrator | = 1.53.0"... 2025-05-31 15:27:09.849625 | orchestrator | 15:27:09.849 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-05-31 15:27:10.863192 | orchestrator | 15:27:10.862 STDOUT terraform: - Installing hashicorp/null v3.2.4... 2025-05-31 15:27:11.735181 | orchestrator | 15:27:11.734 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-05-31 15:27:12.979062 | orchestrator | 15:27:12.978 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0... 
2025-05-31 15:27:14.169605 | orchestrator | 15:27:14.169 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2) 2025-05-31 15:27:15.144691 | orchestrator | 15:27:15.144 STDOUT terraform: - Installing hashicorp/local v2.5.3... 2025-05-31 15:27:15.982561 | orchestrator | 15:27:15.982 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80) 2025-05-31 15:27:15.982633 | orchestrator | 15:27:15.982 STDOUT terraform: Providers are signed by their developers. 2025-05-31 15:27:15.982640 | orchestrator | 15:27:15.982 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-05-31 15:27:15.982694 | orchestrator | 15:27:15.982 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-05-31 15:27:15.982774 | orchestrator | 15:27:15.982 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-05-31 15:27:15.982883 | orchestrator | 15:27:15.982 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-05-31 15:27:15.982980 | orchestrator | 15:27:15.982 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-05-31 15:27:15.983065 | orchestrator | 15:27:15.982 STDOUT terraform: you run "tofu init" in the future. 2025-05-31 15:27:15.983299 | orchestrator | 15:27:15.983 STDOUT terraform: OpenTofu has been successfully initialized! 2025-05-31 15:27:15.983346 | orchestrator | 15:27:15.983 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-05-31 15:27:15.983441 | orchestrator | 15:27:15.983 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-05-31 15:27:15.983469 | orchestrator | 15:27:15.983 STDOUT terraform: should now work. 2025-05-31 15:27:15.983582 | orchestrator | 15:27:15.983 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-05-31 15:27:15.983661 | orchestrator | 15:27:15.983 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-05-31 15:27:15.983742 | orchestrator | 15:27:15.983 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-05-31 15:27:16.177294 | orchestrator | 15:27:16.177 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead. 2025-05-31 15:27:16.370088 | orchestrator | 15:27:16.369 STDOUT terraform: Created and switched to workspace "ci"! 2025-05-31 15:27:16.372030 | orchestrator | 15:27:16.370 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-05-31 15:27:16.372045 | orchestrator | 15:27:16.370 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-05-31 15:27:16.372051 | orchestrator | 15:27:16.370 STDOUT terraform: for this configuration. 2025-05-31 15:27:16.586158 | orchestrator | 15:27:16.585 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead. 
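
The provider resolution logged above (hashicorp/null v3.2.4, terraform-provider-openstack/openstack v3.1.0, hashicorp/local v2.5.3) would correspond to a required_providers block roughly like the sketch below. This is illustrative only, not copied from the osism/testbed repository; of the version constraints, only the ">= 2.2.0" constraint on hashicorp/local is visible in this log.

    terraform {
      required_providers {
        # Resolved above to v3.2.4
        null = {
          source = "hashicorp/null"
        }
        # Resolved above to v3.1.0
        openstack = {
          source = "terraform-provider-openstack/openstack"
        }
        # Resolved above to v2.5.3; the ">= 2.2.0" constraint appears in the log
        local = {
          source  = "hashicorp/local"
          version = ">= 2.2.0"
        }
      }
    }

As the output above notes, the init step (driven through Terragrunt here, hence the TERRAGRUNT_TFPATH deprecation warnings) records these selections in .terraform.lock.hcl before the "ci" workspace is created.
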
2025-05-31 15:27:16.719643 | orchestrator | 15:27:16.719 STDOUT terraform: ci.auto.tfvars 2025-05-31 15:27:16.728883 | orchestrator | 15:27:16.728 STDOUT terraform: default_custom.tf 2025-05-31 15:27:16.928743 | orchestrator | 15:27:16.928 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead. 2025-05-31 15:27:17.814283 | orchestrator | 15:27:17.813 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-05-31 15:27:18.328141 | orchestrator | 15:27:18.327 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-05-31 15:27:18.614269 | orchestrator | 15:27:18.614 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-05-31 15:27:18.614331 | orchestrator | 15:27:18.614 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-05-31 15:27:18.614337 | orchestrator | 15:27:18.614 STDOUT terraform:  + create 2025-05-31 15:27:18.614343 | orchestrator | 15:27:18.614 STDOUT terraform:  <= read (data resources) 2025-05-31 15:27:18.614348 | orchestrator | 15:27:18.614 STDOUT terraform: OpenTofu will perform the following actions: 2025-05-31 15:27:18.614376 | orchestrator | 15:27:18.614 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-05-31 15:27:18.614403 | orchestrator | 15:27:18.614 STDOUT terraform:  # (config refers to values not yet known) 2025-05-31 15:27:18.614435 | orchestrator | 15:27:18.614 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-05-31 15:27:18.614464 | orchestrator | 15:27:18.614 STDOUT terraform:  + checksum = (known after apply) 2025-05-31 15:27:18.614493 | orchestrator | 15:27:18.614 STDOUT terraform:  + created_at = (known after apply) 2025-05-31 15:27:18.614522 | orchestrator | 15:27:18.614 STDOUT terraform:  + file = (known after apply) 2025-05-31 15:27:18.614549 | orchestrator | 15:27:18.614 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.614578 | orchestrator | 15:27:18.614 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.614609 | orchestrator | 15:27:18.614 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-31 15:27:18.614637 | orchestrator | 15:27:18.614 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-31 15:27:18.614656 | orchestrator | 15:27:18.614 STDOUT terraform:  + most_recent = true 2025-05-31 15:27:18.614685 | orchestrator | 15:27:18.614 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.614713 | orchestrator | 15:27:18.614 STDOUT terraform:  + protected = (known after apply) 2025-05-31 15:27:18.614742 | orchestrator | 15:27:18.614 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.614771 | orchestrator | 15:27:18.614 STDOUT terraform:  + schema = (known after apply) 2025-05-31 15:27:18.614802 | orchestrator | 15:27:18.614 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-31 15:27:18.614832 | orchestrator | 15:27:18.614 STDOUT terraform:  + tags = (known after apply) 2025-05-31 15:27:18.614856 | orchestrator | 15:27:18.614 STDOUT terraform:  + updated_at = (known after apply) 2025-05-31 15:27:18.614871 | orchestrator | 15:27:18.614 STDOUT terraform:  } 2025-05-31 15:27:18.614921 | orchestrator | 15:27:18.614 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 
2025-05-31 15:27:18.614955 | orchestrator | 15:27:18.614 STDOUT terraform:  # (config refers to values not yet known) 2025-05-31 15:27:18.614991 | orchestrator | 15:27:18.614 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-31 15:27:18.615030 | orchestrator | 15:27:18.614 STDOUT terraform:  + checksum = (known after apply) 2025-05-31 15:27:18.615058 | orchestrator | 15:27:18.615 STDOUT terraform:  + created_at = (known after apply) 2025-05-31 15:27:18.615089 | orchestrator | 15:27:18.615 STDOUT terraform:  + file = (known after apply) 2025-05-31 15:27:18.615127 | orchestrator | 15:27:18.615 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.615147 | orchestrator | 15:27:18.615 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.615176 | orchestrator | 15:27:18.615 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-31 15:27:18.615206 | orchestrator | 15:27:18.615 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-31 15:27:18.615226 | orchestrator | 15:27:18.615 STDOUT terraform:  + most_recent = true 2025-05-31 15:27:18.615255 | orchestrator | 15:27:18.615 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.615294 | orchestrator | 15:27:18.615 STDOUT terraform:  + protected = (known after apply) 2025-05-31 15:27:18.615314 | orchestrator | 15:27:18.615 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.615348 | orchestrator | 15:27:18.615 STDOUT terraform:  + schema = (known after apply) 2025-05-31 15:27:18.615380 | orchestrator | 15:27:18.615 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-31 15:27:18.615407 | orchestrator | 15:27:18.615 STDOUT terraform:  + tags = (known after apply) 2025-05-31 15:27:18.615436 | orchestrator | 15:27:18.615 STDOUT terraform:  + updated_at = (known after apply) 2025-05-31 15:27:18.615450 | orchestrator | 15:27:18.615 STDOUT terraform:  } 2025-05-31 15:27:18.615479 | orchestrator | 15:27:18.615 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-31 15:27:18.615510 | orchestrator | 15:27:18.615 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-31 15:27:18.615546 | orchestrator | 15:27:18.615 STDOUT terraform:  + content = (known after apply) 2025-05-31 15:27:18.615581 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 15:27:18.615624 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-31 15:27:18.615652 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 15:27:18.615688 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 15:27:18.615725 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 15:27:18.615759 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 15:27:18.615791 | orchestrator | 15:27:18.615 STDOUT terraform:  + directory_permission = "0777" 2025-05-31 15:27:18.615810 | orchestrator | 15:27:18.615 STDOUT terraform:  + file_permission = "0644" 2025-05-31 15:27:18.615844 | orchestrator | 15:27:18.615 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-31 15:27:18.615879 | orchestrator | 15:27:18.615 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.615886 | orchestrator | 15:27:18.615 STDOUT terraform:  } 2025-05-31 15:27:18.615915 | orchestrator | 15:27:18.615 STDOUT 
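
The two image lookups planned above (data.openstack_images_image_v2.image and image_node) and the .MANAGER_ADDRESS.ci file follow a common pattern: select the newest image matching a name that is only resolved at apply time, and write the manager's address to a workspace-suffixed local file. A minimal sketch under those assumptions; the variable name and the content source are illustrative, since the plan only shows them as (known after apply):

    # Newest image whose name is resolved at apply time in the plan above.
    data "openstack_images_image_v2" "image" {
      name        = var.image          # assumed variable, not visible in this log
      most_recent = true
    }

    # Rendered as .MANAGER_ADDRESS.ci for the "ci" workspace created above.
    resource "local_file" "MANAGER_ADDRESS" {
      content  = openstack_compute_instance_v2.manager_server.access_ip_v4  # assumed source attribute
      filename = ".MANAGER_ADDRESS.${terraform.workspace}"
    }
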
terraform:  # local_file.id_rsa_pub will be created 2025-05-31 15:27:18.615939 | orchestrator | 15:27:18.615 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-31 15:27:18.615974 | orchestrator | 15:27:18.615 STDOUT terraform:  + content = (known after apply) 2025-05-31 15:27:18.616033 | orchestrator | 15:27:18.615 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 15:27:18.616069 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-31 15:27:18.616104 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 15:27:18.616147 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 15:27:18.616175 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 15:27:18.616209 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 15:27:18.616231 | orchestrator | 15:27:18.616 STDOUT terraform:  + directory_permission = "0777" 2025-05-31 15:27:18.616257 | orchestrator | 15:27:18.616 STDOUT terraform:  + file_permission = "0644" 2025-05-31 15:27:18.616288 | orchestrator | 15:27:18.616 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-31 15:27:18.616323 | orchestrator | 15:27:18.616 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.616331 | orchestrator | 15:27:18.616 STDOUT terraform:  } 2025-05-31 15:27:18.616360 | orchestrator | 15:27:18.616 STDOUT terraform:  # local_file.inventory will be created 2025-05-31 15:27:18.616386 | orchestrator | 15:27:18.616 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-31 15:27:18.616421 | orchestrator | 15:27:18.616 STDOUT terraform:  + content = (known after apply) 2025-05-31 15:27:18.616455 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 15:27:18.616497 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-31 15:27:18.616525 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 15:27:18.616562 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 15:27:18.616594 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 15:27:18.616628 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 15:27:18.616651 | orchestrator | 15:27:18.616 STDOUT terraform:  + directory_permission = "0777" 2025-05-31 15:27:18.616675 | orchestrator | 15:27:18.616 STDOUT terraform:  + file_permission = "0644" 2025-05-31 15:27:18.616712 | orchestrator | 15:27:18.616 STDOUT terraform:  + filename = "inventory.ci" 2025-05-31 15:27:18.616751 | orchestrator | 15:27:18.616 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.616758 | orchestrator | 15:27:18.616 STDOUT terraform:  } 2025-05-31 15:27:18.616789 | orchestrator | 15:27:18.616 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-31 15:27:18.616818 | orchestrator | 15:27:18.616 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-31 15:27:18.616849 | orchestrator | 15:27:18.616 STDOUT terraform:  + content = (sensitive value) 2025-05-31 15:27:18.616883 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 15:27:18.616919 | orchestrator | 15:27:18.616 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-05-31 15:27:18.616952 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 15:27:18.616986 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 15:27:18.617067 | orchestrator | 15:27:18.616 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 15:27:18.617101 | orchestrator | 15:27:18.617 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 15:27:18.617125 | orchestrator | 15:27:18.617 STDOUT terraform:  + directory_permission = "0700" 2025-05-31 15:27:18.617149 | orchestrator | 15:27:18.617 STDOUT terraform:  + file_permission = "0600" 2025-05-31 15:27:18.617179 | orchestrator | 15:27:18.617 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-31 15:27:18.617216 | orchestrator | 15:27:18.617 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.617224 | orchestrator | 15:27:18.617 STDOUT terraform:  } 2025-05-31 15:27:18.617257 | orchestrator | 15:27:18.617 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-31 15:27:18.617286 | orchestrator | 15:27:18.617 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-31 15:27:18.617307 | orchestrator | 15:27:18.617 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.617313 | orchestrator | 15:27:18.617 STDOUT terraform:  } 2025-05-31 15:27:18.617367 | orchestrator | 15:27:18.617 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-31 15:27:18.617411 | orchestrator | 15:27:18.617 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-31 15:27:18.617449 | orchestrator | 15:27:18.617 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.617472 | orchestrator | 15:27:18.617 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.617510 | orchestrator | 15:27:18.617 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.617546 | orchestrator | 15:27:18.617 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.617584 | orchestrator | 15:27:18.617 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.617626 | orchestrator | 15:27:18.617 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-31 15:27:18.617665 | orchestrator | 15:27:18.617 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.617683 | orchestrator | 15:27:18.617 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.618626 | orchestrator | 15:27:18.617 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.618682 | orchestrator | 15:27:18.618 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.618698 | orchestrator | 15:27:18.618 STDOUT terraform:  } 2025-05-31 15:27:18.618768 | orchestrator | 15:27:18.618 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-31 15:27:18.618825 | orchestrator | 15:27:18.618 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 15:27:18.618862 | orchestrator | 15:27:18.618 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.618899 | orchestrator | 15:27:18.618 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.618935 | orchestrator | 15:27:18.618 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.618985 | orchestrator | 15:27:18.618 STDOUT terraform:  + image_id = (known 
after apply) 2025-05-31 15:27:18.619049 | orchestrator | 15:27:18.618 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.619092 | orchestrator | 15:27:18.619 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-31 15:27:18.619143 | orchestrator | 15:27:18.619 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.619167 | orchestrator | 15:27:18.619 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.619205 | orchestrator | 15:27:18.619 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.619230 | orchestrator | 15:27:18.619 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.619244 | orchestrator | 15:27:18.619 STDOUT terraform:  } 2025-05-31 15:27:18.619305 | orchestrator | 15:27:18.619 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-31 15:27:18.619365 | orchestrator | 15:27:18.619 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 15:27:18.619403 | orchestrator | 15:27:18.619 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.619436 | orchestrator | 15:27:18.619 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.619472 | orchestrator | 15:27:18.619 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.619522 | orchestrator | 15:27:18.619 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.619560 | orchestrator | 15:27:18.619 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.619617 | orchestrator | 15:27:18.619 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-31 15:27:18.619665 | orchestrator | 15:27:18.619 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.619696 | orchestrator | 15:27:18.619 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.619719 | orchestrator | 15:27:18.619 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.619757 | orchestrator | 15:27:18.619 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.619771 | orchestrator | 15:27:18.619 STDOUT terraform:  } 2025-05-31 15:27:18.619830 | orchestrator | 15:27:18.619 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-31 15:27:18.619875 | orchestrator | 15:27:18.619 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 15:27:18.619924 | orchestrator | 15:27:18.619 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.619948 | orchestrator | 15:27:18.619 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.619997 | orchestrator | 15:27:18.619 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.620059 | orchestrator | 15:27:18.619 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.620096 | orchestrator | 15:27:18.620 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.620152 | orchestrator | 15:27:18.620 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-31 15:27:18.620194 | orchestrator | 15:27:18.620 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.620220 | orchestrator | 15:27:18.620 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.620245 | orchestrator | 15:27:18.620 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.620282 | orchestrator | 15:27:18.620 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.620288 | orchestrator | 15:27:18.620 STDOUT terraform: 
 } 2025-05-31 15:27:18.620338 | orchestrator | 15:27:18.620 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-31 15:27:18.620397 | orchestrator | 15:27:18.620 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 15:27:18.620446 | orchestrator | 15:27:18.620 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.620470 | orchestrator | 15:27:18.620 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.620519 | orchestrator | 15:27:18.620 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.620554 | orchestrator | 15:27:18.620 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.620603 | orchestrator | 15:27:18.620 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.620651 | orchestrator | 15:27:18.620 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-31 15:27:18.620698 | orchestrator | 15:27:18.620 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.620711 | orchestrator | 15:27:18.620 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.621643 | orchestrator | 15:27:18.620 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.621694 | orchestrator | 15:27:18.621 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.621710 | orchestrator | 15:27:18.621 STDOUT terraform:  } 2025-05-31 15:27:18.621762 | orchestrator | 15:27:18.621 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-31 15:27:18.621809 | orchestrator | 15:27:18.621 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 15:27:18.621845 | orchestrator | 15:27:18.621 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.621870 | orchestrator | 15:27:18.621 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.621906 | orchestrator | 15:27:18.621 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.621942 | orchestrator | 15:27:18.621 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.621977 | orchestrator | 15:27:18.621 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.622065 | orchestrator | 15:27:18.621 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-31 15:27:18.622095 | orchestrator | 15:27:18.622 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.622117 | orchestrator | 15:27:18.622 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.622145 | orchestrator | 15:27:18.622 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.622168 | orchestrator | 15:27:18.622 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.622174 | orchestrator | 15:27:18.622 STDOUT terraform:  } 2025-05-31 15:27:18.622226 | orchestrator | 15:27:18.622 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-31 15:27:18.622271 | orchestrator | 15:27:18.622 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 15:27:18.622306 | orchestrator | 15:27:18.622 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.622330 | orchestrator | 15:27:18.622 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.622366 | orchestrator | 15:27:18.622 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.622402 | orchestrator | 15:27:18.622 STDOUT terraform:  + image_id = (known 
after apply) 2025-05-31 15:27:18.622439 | orchestrator | 15:27:18.622 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.622483 | orchestrator | 15:27:18.622 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-31 15:27:18.622520 | orchestrator | 15:27:18.622 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.622540 | orchestrator | 15:27:18.622 STDOUT terraform:  + size = 80 2025-05-31 15:27:18.622564 | orchestrator | 15:27:18.622 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.622590 | orchestrator | 15:27:18.622 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.622596 | orchestrator | 15:27:18.622 STDOUT terraform:  } 2025-05-31 15:27:18.622644 | orchestrator | 15:27:18.622 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-31 15:27:18.622686 | orchestrator | 15:27:18.622 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.622721 | orchestrator | 15:27:18.622 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.622746 | orchestrator | 15:27:18.622 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.622782 | orchestrator | 15:27:18.622 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.622818 | orchestrator | 15:27:18.622 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.622857 | orchestrator | 15:27:18.622 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-31 15:27:18.622892 | orchestrator | 15:27:18.622 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.622913 | orchestrator | 15:27:18.622 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.622937 | orchestrator | 15:27:18.622 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.622961 | orchestrator | 15:27:18.622 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.622967 | orchestrator | 15:27:18.622 STDOUT terraform:  } 2025-05-31 15:27:18.623026 | orchestrator | 15:27:18.622 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-31 15:27:18.623072 | orchestrator | 15:27:18.623 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.623104 | orchestrator | 15:27:18.623 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.623128 | orchestrator | 15:27:18.623 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.623164 | orchestrator | 15:27:18.623 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.623199 | orchestrator | 15:27:18.623 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.623238 | orchestrator | 15:27:18.623 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-31 15:27:18.623273 | orchestrator | 15:27:18.623 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.623295 | orchestrator | 15:27:18.623 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.623318 | orchestrator | 15:27:18.623 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.623344 | orchestrator | 15:27:18.623 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.623350 | orchestrator | 15:27:18.623 STDOUT terraform:  } 2025-05-31 15:27:18.623398 | orchestrator | 15:27:18.623 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-31 15:27:18.623442 | orchestrator | 15:27:18.623 STDOUT terraform:  + resource 
"openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.623478 | orchestrator | 15:27:18.623 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.623502 | orchestrator | 15:27:18.623 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.623539 | orchestrator | 15:27:18.623 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.623574 | orchestrator | 15:27:18.623 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.623612 | orchestrator | 15:27:18.623 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-31 15:27:18.623648 | orchestrator | 15:27:18.623 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.623669 | orchestrator | 15:27:18.623 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.623692 | orchestrator | 15:27:18.623 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.623716 | orchestrator | 15:27:18.623 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.623723 | orchestrator | 15:27:18.623 STDOUT terraform:  } 2025-05-31 15:27:18.623770 | orchestrator | 15:27:18.623 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-31 15:27:18.623813 | orchestrator | 15:27:18.623 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.623849 | orchestrator | 15:27:18.623 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.623872 | orchestrator | 15:27:18.623 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.623908 | orchestrator | 15:27:18.623 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.623944 | orchestrator | 15:27:18.623 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.623982 | orchestrator | 15:27:18.623 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-31 15:27:18.624042 | orchestrator | 15:27:18.623 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.624061 | orchestrator | 15:27:18.624 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.624093 | orchestrator | 15:27:18.624 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.624115 | orchestrator | 15:27:18.624 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.624121 | orchestrator | 15:27:18.624 STDOUT terraform:  } 2025-05-31 15:27:18.624168 | orchestrator | 15:27:18.624 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-31 15:27:18.624211 | orchestrator | 15:27:18.624 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.624247 | orchestrator | 15:27:18.624 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.624272 | orchestrator | 15:27:18.624 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.624309 | orchestrator | 15:27:18.624 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.624345 | orchestrator | 15:27:18.624 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.624383 | orchestrator | 15:27:18.624 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-31 15:27:18.624419 | orchestrator | 15:27:18.624 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.624441 | orchestrator | 15:27:18.624 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.624467 | orchestrator | 15:27:18.624 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.624491 | orchestrator | 15:27:18.624 
STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.624502 | orchestrator | 15:27:18.624 STDOUT terraform:  } 2025-05-31 15:27:18.624546 | orchestrator | 15:27:18.624 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-31 15:27:18.624589 | orchestrator | 15:27:18.624 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.624624 | orchestrator | 15:27:18.624 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.624648 | orchestrator | 15:27:18.624 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.624684 | orchestrator | 15:27:18.624 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.624720 | orchestrator | 15:27:18.624 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.624759 | orchestrator | 15:27:18.624 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-31 15:27:18.624795 | orchestrator | 15:27:18.624 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.624816 | orchestrator | 15:27:18.624 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.624839 | orchestrator | 15:27:18.624 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.624865 | orchestrator | 15:27:18.624 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.624872 | orchestrator | 15:27:18.624 STDOUT terraform:  } 2025-05-31 15:27:18.624918 | orchestrator | 15:27:18.624 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-31 15:27:18.624961 | orchestrator | 15:27:18.624 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.624996 | orchestrator | 15:27:18.624 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.625035 | orchestrator | 15:27:18.624 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.625063 | orchestrator | 15:27:18.625 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.625111 | orchestrator | 15:27:18.625 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.625149 | orchestrator | 15:27:18.625 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-31 15:27:18.625185 | orchestrator | 15:27:18.625 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.625206 | orchestrator | 15:27:18.625 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.625233 | orchestrator | 15:27:18.625 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.625254 | orchestrator | 15:27:18.625 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.625260 | orchestrator | 15:27:18.625 STDOUT terraform:  } 2025-05-31 15:27:18.625309 | orchestrator | 15:27:18.625 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-31 15:27:18.625351 | orchestrator | 15:27:18.625 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.625386 | orchestrator | 15:27:18.625 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.625410 | orchestrator | 15:27:18.625 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.625446 | orchestrator | 15:27:18.625 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.625481 | orchestrator | 15:27:18.625 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.625520 | orchestrator | 15:27:18.625 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-31 
15:27:18.625555 | orchestrator | 15:27:18.625 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.625577 | orchestrator | 15:27:18.625 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.625601 | orchestrator | 15:27:18.625 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.625625 | orchestrator | 15:27:18.625 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.625631 | orchestrator | 15:27:18.625 STDOUT terraform:  } 2025-05-31 15:27:18.625680 | orchestrator | 15:27:18.625 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-31 15:27:18.625723 | orchestrator | 15:27:18.625 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 15:27:18.625759 | orchestrator | 15:27:18.625 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 15:27:18.625783 | orchestrator | 15:27:18.625 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.625821 | orchestrator | 15:27:18.625 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.625857 | orchestrator | 15:27:18.625 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 15:27:18.625896 | orchestrator | 15:27:18.625 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-31 15:27:18.625933 | orchestrator | 15:27:18.625 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.625960 | orchestrator | 15:27:18.625 STDOUT terraform:  + size = 20 2025-05-31 15:27:18.625982 | orchestrator | 15:27:18.625 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 15:27:18.626044 | orchestrator | 15:27:18.625 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 15:27:18.626052 | orchestrator | 15:27:18.626 STDOUT terraform:  } 2025-05-31 15:27:18.626078 | orchestrator | 15:27:18.626 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-31 15:27:18.626122 | orchestrator | 15:27:18.626 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-31 15:27:18.626157 | orchestrator | 15:27:18.626 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.626192 | orchestrator | 15:27:18.626 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.626229 | orchestrator | 15:27:18.626 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.626263 | orchestrator | 15:27:18.626 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.626287 | orchestrator | 15:27:18.626 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.626309 | orchestrator | 15:27:18.626 STDOUT terraform:  + config_drive = true 2025-05-31 15:27:18.626350 | orchestrator | 15:27:18.626 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.626385 | orchestrator | 15:27:18.626 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.626415 | orchestrator | 15:27:18.626 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-31 15:27:18.626439 | orchestrator | 15:27:18.626 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.626473 | orchestrator | 15:27:18.626 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.626510 | orchestrator | 15:27:18.626 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.626548 | orchestrator | 15:27:18.626 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.626581 | orchestrator | 15:27:18.626 STDOUT terraform:  + image_name = (known after 
apply) 2025-05-31 15:27:18.626605 | orchestrator | 15:27:18.626 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.626637 | orchestrator | 15:27:18.626 STDOUT terraform:  + name = "testbed-manager" 2025-05-31 15:27:18.626662 | orchestrator | 15:27:18.626 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.626697 | orchestrator | 15:27:18.626 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.626732 | orchestrator | 15:27:18.626 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.626755 | orchestrator | 15:27:18.626 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.626791 | orchestrator | 15:27:18.626 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.626825 | orchestrator | 15:27:18.626 STDOUT terraform:  + user_data = (known after apply) 2025-05-31 15:27:18.626842 | orchestrator | 15:27:18.626 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.626868 | orchestrator | 15:27:18.626 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.626895 | orchestrator | 15:27:18.626 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.626926 | orchestrator | 15:27:18.626 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.626954 | orchestrator | 15:27:18.626 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.626985 | orchestrator | 15:27:18.626 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.627033 | orchestrator | 15:27:18.626 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.627040 | orchestrator | 15:27:18.627 STDOUT terraform:  } 2025-05-31 15:27:18.627071 | orchestrator | 15:27:18.627 STDOUT terraform:  + network { 2025-05-31 15:27:18.627092 | orchestrator | 15:27:18.627 STDOUT terraform:  + access_network = false 2025-05-31 15:27:18.627125 | orchestrator | 15:27:18.627 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.627165 | orchestrator | 15:27:18.627 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.627196 | orchestrator | 15:27:18.627 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.627230 | orchestrator | 15:27:18.627 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.627262 | orchestrator | 15:27:18.627 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.627294 | orchestrator | 15:27:18.627 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.627306 | orchestrator | 15:27:18.627 STDOUT terraform:  } 2025-05-31 15:27:18.627312 | orchestrator | 15:27:18.627 STDOUT terraform:  } 2025-05-31 15:27:18.627357 | orchestrator | 15:27:18.627 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-31 15:27:18.627400 | orchestrator | 15:27:18.627 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 15:27:18.627437 | orchestrator | 15:27:18.627 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.627472 | orchestrator | 15:27:18.627 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.627506 | orchestrator | 15:27:18.627 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.627542 | orchestrator | 15:27:18.627 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.627567 | orchestrator | 15:27:18.627 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.627587 | orchestrator | 15:27:18.627 STDOUT terraform:  + config_drive = true 
2025-05-31 15:27:18.627622 | orchestrator | 15:27:18.627 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.627658 | orchestrator | 15:27:18.627 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.627687 | orchestrator | 15:27:18.627 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 15:27:18.627712 | orchestrator | 15:27:18.627 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.627747 | orchestrator | 15:27:18.627 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.627783 | orchestrator | 15:27:18.627 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.627823 | orchestrator | 15:27:18.627 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.627859 | orchestrator | 15:27:18.627 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 15:27:18.627886 | orchestrator | 15:27:18.627 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.627917 | orchestrator | 15:27:18.627 STDOUT terraform:  + name = "testbed-node-0" 2025-05-31 15:27:18.627942 | orchestrator | 15:27:18.627 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.627984 | orchestrator | 15:27:18.627 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.628039 | orchestrator | 15:27:18.627 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.628059 | orchestrator | 15:27:18.628 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.628095 | orchestrator | 15:27:18.628 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.628145 | orchestrator | 15:27:18.628 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 15:27:18.628162 | orchestrator | 15:27:18.628 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.628187 | orchestrator | 15:27:18.628 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.628215 | orchestrator | 15:27:18.628 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.628244 | orchestrator | 15:27:18.628 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.628272 | orchestrator | 15:27:18.628 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.628302 | orchestrator | 15:27:18.628 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.628341 | orchestrator | 15:27:18.628 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.628356 | orchestrator | 15:27:18.628 STDOUT terraform:  } 2025-05-31 15:27:18.628372 | orchestrator | 15:27:18.628 STDOUT terraform:  + network { 2025-05-31 15:27:18.628393 | orchestrator | 15:27:18.628 STDOUT terraform:  + access_network = false 2025-05-31 15:27:18.628425 | orchestrator | 15:27:18.628 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.628456 | orchestrator | 15:27:18.628 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.628489 | orchestrator | 15:27:18.628 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.628520 | orchestrator | 15:27:18.628 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.628552 | orchestrator | 15:27:18.628 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.628613 | orchestrator | 15:27:18.628 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.628628 | orchestrator | 15:27:18.628 STDOUT terraform:  } 2025-05-31 15:27:18.628643 | orchestrator | 15:27:18.628 STDOUT terraform:  } 2025-05-31 15:27:18.628687 | orchestrator | 
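
node_server[0] above is the first of several identically shaped instances: flavor OSISM-8V-32, key pair "testbed", availability zone "nova", config drive enabled, booting from a pre-created volume rather than an image. A condensed sketch of such a resource, assuming a node_count variable and a port resource name that do not appear in this log:

    resource "openstack_compute_instance_v2" "node_server" {
      count             = var.node_count                 # assumed variable name
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = "testbed"
      config_drive      = true
      power_state       = "active"

      block_device {
        # Boot from the matching pre-created base volume instead of an image.
        uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      network {
        # The plan above only shows port/uuid as (known after apply); a port resource is assumed here.
        port = openstack_networking_port_v2.node_port[count.index].id
      }
    }
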
15:27:18.628 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-31 15:27:18.628735 | orchestrator | 15:27:18.628 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 15:27:18.629228 | orchestrator | 15:27:18.628 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.629247 | orchestrator | 15:27:18.628 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.629252 | orchestrator | 15:27:18.628 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.629255 | orchestrator | 15:27:18.628 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.629260 | orchestrator | 15:27:18.628 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.629264 | orchestrator | 15:27:18.628 STDOUT terraform:  + config_drive = true 2025-05-31 15:27:18.629268 | orchestrator | 15:27:18.628 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.629271 | orchestrator | 15:27:18.628 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.629275 | orchestrator | 15:27:18.628 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 15:27:18.629279 | orchestrator | 15:27:18.628 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.629282 | orchestrator | 15:27:18.629 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.629286 | orchestrator | 15:27:18.629 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.629290 | orchestrator | 15:27:18.629 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.629294 | orchestrator | 15:27:18.629 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 15:27:18.629303 | orchestrator | 15:27:18.629 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.629306 | orchestrator | 15:27:18.629 STDOUT terraform:  + name = "testbed-node-1" 2025-05-31 15:27:18.629310 | orchestrator | 15:27:18.629 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.629316 | orchestrator | 15:27:18.629 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.629320 | orchestrator | 15:27:18.629 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.629323 | orchestrator | 15:27:18.629 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.629329 | orchestrator | 15:27:18.629 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.629373 | orchestrator | 15:27:18.629 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 15:27:18.629388 | orchestrator | 15:27:18.629 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.629414 | orchestrator | 15:27:18.629 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.629442 | orchestrator | 15:27:18.629 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.629471 | orchestrator | 15:27:18.629 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.629499 | orchestrator | 15:27:18.629 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.629533 | orchestrator | 15:27:18.629 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.629568 | orchestrator | 15:27:18.629 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.629582 | orchestrator | 15:27:18.629 STDOUT terraform:  } 2025-05-31 15:27:18.629598 | orchestrator | 15:27:18.629 STDOUT terraform:  + network { 2025-05-31 15:27:18.629619 | orchestrator | 15:27:18.629 STDOUT 
terraform:  + access_network = false 2025-05-31 15:27:18.629649 | orchestrator | 15:27:18.629 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.629681 | orchestrator | 15:27:18.629 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.629714 | orchestrator | 15:27:18.629 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.629746 | orchestrator | 15:27:18.629 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.629778 | orchestrator | 15:27:18.629 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.629809 | orchestrator | 15:27:18.629 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.629824 | orchestrator | 15:27:18.629 STDOUT terraform:  } 2025-05-31 15:27:18.629838 | orchestrator | 15:27:18.629 STDOUT terraform:  } 2025-05-31 15:27:18.629883 | orchestrator | 15:27:18.629 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-31 15:27:18.629928 | orchestrator | 15:27:18.629 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 15:27:18.629964 | orchestrator | 15:27:18.629 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.629999 | orchestrator | 15:27:18.629 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.630092 | orchestrator | 15:27:18.629 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.630128 | orchestrator | 15:27:18.630 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.630152 | orchestrator | 15:27:18.630 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.630174 | orchestrator | 15:27:18.630 STDOUT terraform:  + config_drive = true 2025-05-31 15:27:18.630209 | orchestrator | 15:27:18.630 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.630244 | orchestrator | 15:27:18.630 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.630275 | orchestrator | 15:27:18.630 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 15:27:18.630299 | orchestrator | 15:27:18.630 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.630336 | orchestrator | 15:27:18.630 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.630372 | orchestrator | 15:27:18.630 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.630407 | orchestrator | 15:27:18.630 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.630447 | orchestrator | 15:27:18.630 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 15:27:18.630472 | orchestrator | 15:27:18.630 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.630504 | orchestrator | 15:27:18.630 STDOUT terraform:  + name = "testbed-node-2" 2025-05-31 15:27:18.630528 | orchestrator | 15:27:18.630 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.630563 | orchestrator | 15:27:18.630 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.630598 | orchestrator | 15:27:18.630 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.630621 | orchestrator | 15:27:18.630 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.630656 | orchestrator | 15:27:18.630 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.630708 | orchestrator | 15:27:18.630 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 15:27:18.630726 | orchestrator | 
15:27:18.630 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.630752 | orchestrator | 15:27:18.630 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.630780 | orchestrator | 15:27:18.630 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.630811 | orchestrator | 15:27:18.630 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.630840 | orchestrator | 15:27:18.630 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.630869 | orchestrator | 15:27:18.630 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.630907 | orchestrator | 15:27:18.630 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.630922 | orchestrator | 15:27:18.630 STDOUT terraform:  } 2025-05-31 15:27:18.630937 | orchestrator | 15:27:18.630 STDOUT terraform:  + network { 2025-05-31 15:27:18.630959 | orchestrator | 15:27:18.630 STDOUT terraform:  + access_network = false 2025-05-31 15:27:18.630991 | orchestrator | 15:27:18.630 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.631033 | orchestrator | 15:27:18.630 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.631064 | orchestrator | 15:27:18.631 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.631096 | orchestrator | 15:27:18.631 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.631128 | orchestrator | 15:27:18.631 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.631160 | orchestrator | 15:27:18.631 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.631168 | orchestrator | 15:27:18.631 STDOUT terraform:  } 2025-05-31 15:27:18.631184 | orchestrator | 15:27:18.631 STDOUT terraform:  } 2025-05-31 15:27:18.631288 | orchestrator | 15:27:18.631 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-31 15:27:18.631332 | orchestrator | 15:27:18.631 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 15:27:18.631367 | orchestrator | 15:27:18.631 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.631402 | orchestrator | 15:27:18.631 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.631443 | orchestrator | 15:27:18.631 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.631473 | orchestrator | 15:27:18.631 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.631499 | orchestrator | 15:27:18.631 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.631519 | orchestrator | 15:27:18.631 STDOUT terraform:  + config_drive = true 2025-05-31 15:27:18.631554 | orchestrator | 15:27:18.631 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.631590 | orchestrator | 15:27:18.631 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.631619 | orchestrator | 15:27:18.631 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 15:27:18.631643 | orchestrator | 15:27:18.631 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.631678 | orchestrator | 15:27:18.631 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.631716 | orchestrator | 15:27:18.631 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.631751 | orchestrator | 15:27:18.631 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.631787 | orchestrator | 15:27:18.631 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 15:27:18.631812 | 
orchestrator | 15:27:18.631 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.631843 | orchestrator | 15:27:18.631 STDOUT terraform:  + name = "testbed-node-3" 2025-05-31 15:27:18.631867 | orchestrator | 15:27:18.631 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.631902 | orchestrator | 15:27:18.631 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.631936 | orchestrator | 15:27:18.631 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.631959 | orchestrator | 15:27:18.631 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.631997 | orchestrator | 15:27:18.631 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.632057 | orchestrator | 15:27:18.631 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 15:27:18.632073 | orchestrator | 15:27:18.632 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.632098 | orchestrator | 15:27:18.632 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.632126 | orchestrator | 15:27:18.632 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.632156 | orchestrator | 15:27:18.632 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.632186 | orchestrator | 15:27:18.632 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.632217 | orchestrator | 15:27:18.632 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.632254 | orchestrator | 15:27:18.632 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.632268 | orchestrator | 15:27:18.632 STDOUT terraform:  } 2025-05-31 15:27:18.632288 | orchestrator | 15:27:18.632 STDOUT terraform:  + network { 2025-05-31 15:27:18.632308 | orchestrator | 15:27:18.632 STDOUT terraform:  + access_network = false 2025-05-31 15:27:18.632339 | orchestrator | 15:27:18.632 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.632370 | orchestrator | 15:27:18.632 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.632400 | orchestrator | 15:27:18.632 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.632434 | orchestrator | 15:27:18.632 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.632464 | orchestrator | 15:27:18.632 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.632495 | orchestrator | 15:27:18.632 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.632509 | orchestrator | 15:27:18.632 STDOUT terraform:  } 2025-05-31 15:27:18.632523 | orchestrator | 15:27:18.632 STDOUT terraform:  } 2025-05-31 15:27:18.632567 | orchestrator | 15:27:18.632 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-31 15:27:18.632608 | orchestrator | 15:27:18.632 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 15:27:18.632643 | orchestrator | 15:27:18.632 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.632677 | orchestrator | 15:27:18.632 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.632712 | orchestrator | 15:27:18.632 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.632748 | orchestrator | 15:27:18.632 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.632771 | orchestrator | 15:27:18.632 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.632792 | orchestrator | 15:27:18.632 STDOUT terraform:  + config_drive = true 2025-05-31 
15:27:18.632827 | orchestrator | 15:27:18.632 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.632871 | orchestrator | 15:27:18.632 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.632900 | orchestrator | 15:27:18.632 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 15:27:18.632925 | orchestrator | 15:27:18.632 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.632959 | orchestrator | 15:27:18.632 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.632994 | orchestrator | 15:27:18.632 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.633053 | orchestrator | 15:27:18.632 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.633087 | orchestrator | 15:27:18.633 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 15:27:18.633115 | orchestrator | 15:27:18.633 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.633144 | orchestrator | 15:27:18.633 STDOUT terraform:  + name = "testbed-node-4" 2025-05-31 15:27:18.633170 | orchestrator | 15:27:18.633 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.633205 | orchestrator | 15:27:18.633 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.633240 | orchestrator | 15:27:18.633 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.633264 | orchestrator | 15:27:18.633 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.633306 | orchestrator | 15:27:18.633 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.633356 | orchestrator | 15:27:18.633 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 15:27:18.633373 | orchestrator | 15:27:18.633 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.633398 | orchestrator | 15:27:18.633 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.633425 | orchestrator | 15:27:18.633 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.633457 | orchestrator | 15:27:18.633 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.633490 | orchestrator | 15:27:18.633 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.633515 | orchestrator | 15:27:18.633 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.633555 | orchestrator | 15:27:18.633 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.633568 | orchestrator | 15:27:18.633 STDOUT terraform:  } 2025-05-31 15:27:18.633584 | orchestrator | 15:27:18.633 STDOUT terraform:  + network { 2025-05-31 15:27:18.633604 | orchestrator | 15:27:18.633 STDOUT terraform:  + access_network = false 2025-05-31 15:27:18.633635 | orchestrator | 15:27:18.633 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.633666 | orchestrator | 15:27:18.633 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.633697 | orchestrator | 15:27:18.633 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.633729 | orchestrator | 15:27:18.633 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.633761 | orchestrator | 15:27:18.633 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.633791 | orchestrator | 15:27:18.633 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.633804 | orchestrator | 15:27:18.633 STDOUT terraform:  } 2025-05-31 15:27:18.633809 | orchestrator | 15:27:18.633 STDOUT terraform:  } 2025-05-31 15:27:18.633855 | orchestrator | 15:27:18.633 
STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-31 15:27:18.633897 | orchestrator | 15:27:18.633 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 15:27:18.633931 | orchestrator | 15:27:18.633 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 15:27:18.633967 | orchestrator | 15:27:18.633 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 15:27:18.634358 | orchestrator | 15:27:18.633 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 15:27:18.634377 | orchestrator | 15:27:18.634 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.634384 | orchestrator | 15:27:18.634 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 15:27:18.634388 | orchestrator | 15:27:18.634 STDOUT terraform:  + config_drive = true 2025-05-31 15:27:18.634405 | orchestrator | 15:27:18.634 STDOUT terraform:  + created = (known after apply) 2025-05-31 15:27:18.634443 | orchestrator | 15:27:18.634 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 15:27:18.634473 | orchestrator | 15:27:18.634 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 15:27:18.634498 | orchestrator | 15:27:18.634 STDOUT terraform:  + force_delete = false 2025-05-31 15:27:18.634533 | orchestrator | 15:27:18.634 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 15:27:18.634568 | orchestrator | 15:27:18.634 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.634607 | orchestrator | 15:27:18.634 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 15:27:18.634640 | orchestrator | 15:27:18.634 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 15:27:18.634664 | orchestrator | 15:27:18.634 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 15:27:18.634695 | orchestrator | 15:27:18.634 STDOUT terraform:  + name = "testbed-node-5" 2025-05-31 15:27:18.634720 | orchestrator | 15:27:18.634 STDOUT terraform:  + power_state = "active" 2025-05-31 15:27:18.634756 | orchestrator | 15:27:18.634 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.634791 | orchestrator | 15:27:18.634 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 15:27:18.634814 | orchestrator | 15:27:18.634 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 15:27:18.634850 | orchestrator | 15:27:18.634 STDOUT terraform:  + updated = (known after apply) 2025-05-31 15:27:18.634903 | orchestrator | 15:27:18.634 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 15:27:18.634919 | orchestrator | 15:27:18.634 STDOUT terraform:  + block_device { 2025-05-31 15:27:18.634945 | orchestrator | 15:27:18.634 STDOUT terraform:  + boot_index = 0 2025-05-31 15:27:18.634972 | orchestrator | 15:27:18.634 STDOUT terraform:  + delete_on_termination = false 2025-05-31 15:27:18.635015 | orchestrator | 15:27:18.634 STDOUT terraform:  + destination_type = "volume" 2025-05-31 15:27:18.635039 | orchestrator | 15:27:18.634 STDOUT terraform:  + multiattach = false 2025-05-31 15:27:18.635069 | orchestrator | 15:27:18.635 STDOUT terraform:  + source_type = "volume" 2025-05-31 15:27:18.635108 | orchestrator | 15:27:18.635 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.635127 | orchestrator | 15:27:18.635 STDOUT terraform:  } 2025-05-31 15:27:18.635143 | orchestrator | 15:27:18.635 STDOUT terraform:  + network { 2025-05-31 15:27:18.635163 | orchestrator | 15:27:18.635 STDOUT terraform:  + 
access_network = false 2025-05-31 15:27:18.635194 | orchestrator | 15:27:18.635 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 15:27:18.635224 | orchestrator | 15:27:18.635 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 15:27:18.635255 | orchestrator | 15:27:18.635 STDOUT terraform:  + mac = (known after apply) 2025-05-31 15:27:18.635287 | orchestrator | 15:27:18.635 STDOUT terraform:  + name = (known after apply) 2025-05-31 15:27:18.635319 | orchestrator | 15:27:18.635 STDOUT terraform:  + port = (known after apply) 2025-05-31 15:27:18.635350 | orchestrator | 15:27:18.635 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 15:27:18.635357 | orchestrator | 15:27:18.635 STDOUT terraform:  } 2025-05-31 15:27:18.635374 | orchestrator | 15:27:18.635 STDOUT terraform:  } 2025-05-31 15:27:18.635410 | orchestrator | 15:27:18.635 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-31 15:27:18.635444 | orchestrator | 15:27:18.635 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-31 15:27:18.635472 | orchestrator | 15:27:18.635 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-31 15:27:18.635501 | orchestrator | 15:27:18.635 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.635523 | orchestrator | 15:27:18.635 STDOUT terraform:  + name = "testbed" 2025-05-31 15:27:18.635547 | orchestrator | 15:27:18.635 STDOUT terraform:  + private_key = (sensitive value) 2025-05-31 15:27:18.635575 | orchestrator | 15:27:18.635 STDOUT terraform:  + public_key = (known after apply) 2025-05-31 15:27:18.635603 | orchestrator | 15:27:18.635 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.635637 | orchestrator | 15:27:18.635 STDOUT terraform:  + user_id = (known after apply) 2025-05-31 15:27:18.635644 | orchestrator | 15:27:18.635 STDOUT terraform:  } 2025-05-31 15:27:18.635697 | orchestrator | 15:27:18.635 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-31 15:27:18.635747 | orchestrator | 15:27:18.635 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.635776 | orchestrator | 15:27:18.635 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.635807 | orchestrator | 15:27:18.635 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.635834 | orchestrator | 15:27:18.635 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.635862 | orchestrator | 15:27:18.635 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.635890 | orchestrator | 15:27:18.635 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.635905 | orchestrator | 15:27:18.635 STDOUT terraform:  } 2025-05-31 15:27:18.635950 | orchestrator | 15:27:18.635 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-31 15:27:18.636041 | orchestrator | 15:27:18.635 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.636050 | orchestrator | 15:27:18.635 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.636075 | orchestrator | 15:27:18.636 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.636103 | orchestrator | 15:27:18.636 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.636133 | orchestrator | 15:27:18.636 STDOUT terraform:  + region = (known after 
apply) 2025-05-31 15:27:18.636162 | orchestrator | 15:27:18.636 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.636176 | orchestrator | 15:27:18.636 STDOUT terraform:  } 2025-05-31 15:27:18.636226 | orchestrator | 15:27:18.636 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-31 15:27:18.636275 | orchestrator | 15:27:18.636 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.636304 | orchestrator | 15:27:18.636 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.636332 | orchestrator | 15:27:18.636 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.636361 | orchestrator | 15:27:18.636 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.636391 | orchestrator | 15:27:18.636 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.636421 | orchestrator | 15:27:18.636 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.636428 | orchestrator | 15:27:18.636 STDOUT terraform:  } 2025-05-31 15:27:18.636481 | orchestrator | 15:27:18.636 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-31 15:27:18.636529 | orchestrator | 15:27:18.636 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.636558 | orchestrator | 15:27:18.636 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.636587 | orchestrator | 15:27:18.636 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.636616 | orchestrator | 15:27:18.636 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.636645 | orchestrator | 15:27:18.636 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.636672 | orchestrator | 15:27:18.636 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.636686 | orchestrator | 15:27:18.636 STDOUT terraform:  } 2025-05-31 15:27:18.636736 | orchestrator | 15:27:18.636 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-31 15:27:18.636785 | orchestrator | 15:27:18.636 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.636813 | orchestrator | 15:27:18.636 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.636842 | orchestrator | 15:27:18.636 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.636871 | orchestrator | 15:27:18.636 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.636901 | orchestrator | 15:27:18.636 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.636932 | orchestrator | 15:27:18.636 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.636938 | orchestrator | 15:27:18.636 STDOUT terraform:  } 2025-05-31 15:27:18.636988 | orchestrator | 15:27:18.636 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-05-31 15:27:18.637049 | orchestrator | 15:27:18.636 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.637079 | orchestrator | 15:27:18.637 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.637109 | orchestrator | 15:27:18.637 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.637137 | orchestrator | 15:27:18.637 STDOUT terraform: 
 + instance_id = (known after apply) 2025-05-31 15:27:18.637165 | orchestrator | 15:27:18.637 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.637193 | orchestrator | 15:27:18.637 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.637207 | orchestrator | 15:27:18.637 STDOUT terraform:  } 2025-05-31 15:27:18.637257 | orchestrator | 15:27:18.637 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-31 15:27:18.637306 | orchestrator | 15:27:18.637 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.637334 | orchestrator | 15:27:18.637 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.637362 | orchestrator | 15:27:18.637 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.637390 | orchestrator | 15:27:18.637 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.637419 | orchestrator | 15:27:18.637 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.637448 | orchestrator | 15:27:18.637 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.637454 | orchestrator | 15:27:18.637 STDOUT terraform:  } 2025-05-31 15:27:18.637505 | orchestrator | 15:27:18.637 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-31 15:27:18.637553 | orchestrator | 15:27:18.637 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.637582 | orchestrator | 15:27:18.637 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.637610 | orchestrator | 15:27:18.637 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.637638 | orchestrator | 15:27:18.637 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.637666 | orchestrator | 15:27:18.637 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.637694 | orchestrator | 15:27:18.637 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.637701 | orchestrator | 15:27:18.637 STDOUT terraform:  } 2025-05-31 15:27:18.637756 | orchestrator | 15:27:18.637 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-31 15:27:18.637805 | orchestrator | 15:27:18.637 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-31 15:27:18.637831 | orchestrator | 15:27:18.637 STDOUT terraform:  + device = (known after apply) 2025-05-31 15:27:18.637861 | orchestrator | 15:27:18.637 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.637889 | orchestrator | 15:27:18.637 STDOUT terraform:  + instance_id = (known after apply) 2025-05-31 15:27:18.637917 | orchestrator | 15:27:18.637 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.637945 | orchestrator | 15:27:18.637 STDOUT terraform:  + volume_id = (known after apply) 2025-05-31 15:27:18.637951 | orchestrator | 15:27:18.637 STDOUT terraform:  } 2025-05-31 15:27:18.638043 | orchestrator | 15:27:18.637 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-31 15:27:18.638180 | orchestrator | 15:27:18.638 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-31 15:27:18.638210 | orchestrator | 15:27:18.638 STDOUT terraform:  + fixed_ip = (known after apply) 
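The plan entries above for openstack_compute_instance_v2.node_server[0-5], the "testbed" keypair and the nine openstack_compute_volume_attach_v2 attachments follow the usual count-based pattern: six identical OSISM-8V-32 nodes that boot from a volume, attach to a pre-created management port, and get extra data volumes attached after boot. A minimal HCL sketch that would produce plan output of this shape is given below; the local/variable names, the openstack_blockstorage_volume_v3 resources and the attachment-to-instance mapping are assumptions for illustration, not the exact osism/testbed sources (the plan only shows their computed IDs). The user_data value in the plan appears to be a content hash of the rendered cloud-init payload rather than the literal string.

# Sketch only -- anything marked "assumed" does not appear in this plan excerpt.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
  # No public_key is set, so the plan shows public_key as (known after apply)
  # and a sensitive private_key: the key pair is generated at create time.
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  availability_zone = "nova"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = local.node_user_data        # assumed; shown hashed in the plan

  # Boot from a pre-created root volume instead of an ephemeral disk.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_root[count.index].id   # assumed volume resource
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach to the per-node management port (see the node_port_management sketch further below).
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

# Nine additional data volumes; the node/volume mapping used here is an assumption.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_data[count.index].id              # assumed volume resource
}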
2025-05-31 15:27:18.638238 | orchestrator | 15:27:18.638 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-31 15:27:18.638268 | orchestrator | 15:27:18.638 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.638296 | orchestrator | 15:27:18.638 STDOUT terraform:  + port_id = (known after apply) 2025-05-31 15:27:18.638325 | orchestrator | 15:27:18.638 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.638331 | orchestrator | 15:27:18.638 STDOUT terraform:  } 2025-05-31 15:27:18.638384 | orchestrator | 15:27:18.638 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-31 15:27:18.638432 | orchestrator | 15:27:18.638 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-31 15:27:18.638456 | orchestrator | 15:27:18.638 STDOUT terraform:  + address = (known after apply) 2025-05-31 15:27:18.638483 | orchestrator | 15:27:18.638 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.638507 | orchestrator | 15:27:18.638 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-31 15:27:18.638532 | orchestrator | 15:27:18.638 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.638556 | orchestrator | 15:27:18.638 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-31 15:27:18.638582 | orchestrator | 15:27:18.638 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.638603 | orchestrator | 15:27:18.638 STDOUT terraform:  + pool = "public" 2025-05-31 15:27:18.638628 | orchestrator | 15:27:18.638 STDOUT terraform:  + port_id = (known after apply) 2025-05-31 15:27:18.642098 | orchestrator | 15:27:18.638 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.642138 | orchestrator | 15:27:18.638 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.642145 | orchestrator | 15:27:18.638 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.642149 | orchestrator | 15:27:18.638 STDOUT terraform:  } 2025-05-31 15:27:18.642153 | orchestrator | 15:27:18.638 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-31 15:27:18.642164 | orchestrator | 15:27:18.638 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-31 15:27:18.642169 | orchestrator | 15:27:18.638 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.642173 | orchestrator | 15:27:18.638 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.642176 | orchestrator | 15:27:18.638 STDOUT terraform:  + availability_zone_hints = [ 2025-05-31 15:27:18.642180 | orchestrator | 15:27:18.638 STDOUT terraform:  + "nova", 2025-05-31 15:27:18.642184 | orchestrator | 15:27:18.638 STDOUT terraform:  ] 2025-05-31 15:27:18.642188 | orchestrator | 15:27:18.638 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-31 15:27:18.642191 | orchestrator | 15:27:18.638 STDOUT terraform:  + external = (known after apply) 2025-05-31 15:27:18.642195 | orchestrator | 15:27:18.638 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.642198 | orchestrator | 15:27:18.639 STDOUT terraform:  + mtu = (known after apply) 2025-05-31 15:27:18.642202 | orchestrator | 15:27:18.639 STDOUT terraform:  + name = "net-testbed-management" 2025-05-31 15:27:18.642206 | orchestrator | 15:27:18.639 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.642210 | orchestrator | 15:27:18.639 
STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.642213 | orchestrator | 15:27:18.639 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.642217 | orchestrator | 15:27:18.639 STDOUT terraform:  + shared = (known after apply) 2025-05-31 15:27:18.642221 | orchestrator | 15:27:18.639 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.642224 | orchestrator | 15:27:18.639 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-31 15:27:18.642230 | orchestrator | 15:27:18.639 STDOUT terraform:  + segments (known after apply) 2025-05-31 15:27:18.642234 | orchestrator | 15:27:18.639 STDOUT terraform:  } 2025-05-31 15:27:18.642238 | orchestrator | 15:27:18.639 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-31 15:27:18.642242 | orchestrator | 15:27:18.639 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-31 15:27:18.642245 | orchestrator | 15:27:18.639 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.642249 | orchestrator | 15:27:18.639 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.642253 | orchestrator | 15:27:18.639 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.642257 | orchestrator | 15:27:18.639 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.642260 | orchestrator | 15:27:18.639 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.642264 | orchestrator | 15:27:18.639 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.642268 | orchestrator | 15:27:18.639 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.642271 | orchestrator | 15:27:18.639 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.642278 | orchestrator | 15:27:18.639 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.642282 | orchestrator | 15:27:18.639 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.642294 | orchestrator | 15:27:18.639 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.642297 | orchestrator | 15:27:18.639 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.642301 | orchestrator | 15:27:18.639 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.642305 | orchestrator | 15:27:18.639 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.642308 | orchestrator | 15:27:18.639 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.642312 | orchestrator | 15:27:18.639 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.642316 | orchestrator | 15:27:18.639 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642320 | orchestrator | 15:27:18.639 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.642323 | orchestrator | 15:27:18.639 STDOUT terraform:  } 2025-05-31 15:27:18.642327 | orchestrator | 15:27:18.639 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642331 | orchestrator | 15:27:18.639 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 15:27:18.642334 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642338 | orchestrator | 15:27:18.640 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.642342 | orchestrator | 15:27:18.640 STDOUT terraform:  + fixed_ip 
{ 2025-05-31 15:27:18.642346 | orchestrator | 15:27:18.640 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-31 15:27:18.642349 | orchestrator | 15:27:18.640 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.642353 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642357 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642361 | orchestrator | 15:27:18.640 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-05-31 15:27:18.642364 | orchestrator | 15:27:18.640 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-31 15:27:18.642368 | orchestrator | 15:27:18.640 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.642372 | orchestrator | 15:27:18.640 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.642375 | orchestrator | 15:27:18.640 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.642379 | orchestrator | 15:27:18.640 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.642383 | orchestrator | 15:27:18.640 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.642386 | orchestrator | 15:27:18.640 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.642390 | orchestrator | 15:27:18.640 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.642397 | orchestrator | 15:27:18.640 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.642400 | orchestrator | 15:27:18.640 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.642404 | orchestrator | 15:27:18.640 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.642408 | orchestrator | 15:27:18.640 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.642412 | orchestrator | 15:27:18.640 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.642415 | orchestrator | 15:27:18.640 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.642419 | orchestrator | 15:27:18.640 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.642423 | orchestrator | 15:27:18.640 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.642427 | orchestrator | 15:27:18.640 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.642430 | orchestrator | 15:27:18.640 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642440 | orchestrator | 15:27:18.640 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.642444 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642448 | orchestrator | 15:27:18.640 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642452 | orchestrator | 15:27:18.640 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-31 15:27:18.642455 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642459 | orchestrator | 15:27:18.640 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642463 | orchestrator | 15:27:18.640 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 15:27:18.642467 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642470 | orchestrator | 15:27:18.640 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642474 | orchestrator | 15:27:18.640 STDOUT terraform:  + ip_address = "192.168.16.9/20" 
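The management-side wiring above (a floating IP from the "public" pool, its association, the net-testbed-management network and the manager port with fixed IP 192.168.16.5) corresponds to HCL roughly like the sketch below. The openstack_networking_subnet_v2 resource referenced for the fixed IP is an assumed name; in this part of the plan the subnet only appears as a computed ID.

# Sketch only -- the subnet resource name is assumed.
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
    ip_address = "192.168.16.5"
  }

  # Extra source prefixes the port may use (virtual/range addresses behind the manager).
  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

# The floating IP is bound to the port, tying external access to the manager's management address.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}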
2025-05-31 15:27:18.642478 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642482 | orchestrator | 15:27:18.640 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.642485 | orchestrator | 15:27:18.640 STDOUT terraform:  + fixed_ip { 2025-05-31 15:27:18.642489 | orchestrator | 15:27:18.640 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-31 15:27:18.642493 | orchestrator | 15:27:18.640 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.642497 | orchestrator | 15:27:18.640 STDOUT terraform:  } 2025-05-31 15:27:18.642500 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642504 | orchestrator | 15:27:18.641 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-31 15:27:18.642508 | orchestrator | 15:27:18.641 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-31 15:27:18.642512 | orchestrator | 15:27:18.641 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.642515 | orchestrator | 15:27:18.641 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.642523 | orchestrator | 15:27:18.641 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.642530 | orchestrator | 15:27:18.641 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.642533 | orchestrator | 15:27:18.641 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.642537 | orchestrator | 15:27:18.641 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.642543 | orchestrator | 15:27:18.641 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.642547 | orchestrator | 15:27:18.641 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.642551 | orchestrator | 15:27:18.641 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.642554 | orchestrator | 15:27:18.641 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.642558 | orchestrator | 15:27:18.641 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.642562 | orchestrator | 15:27:18.641 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.642565 | orchestrator | 15:27:18.641 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.642569 | orchestrator | 15:27:18.641 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.642573 | orchestrator | 15:27:18.641 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.642577 | orchestrator | 15:27:18.641 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.642580 | orchestrator | 15:27:18.641 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642584 | orchestrator | 15:27:18.641 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.642588 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642591 | orchestrator | 15:27:18.641 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642595 | orchestrator | 15:27:18.641 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-31 15:27:18.642604 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642608 | orchestrator | 15:27:18.641 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642612 | orchestrator | 15:27:18.641 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 
15:27:18.642616 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642620 | orchestrator | 15:27:18.641 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642623 | orchestrator | 15:27:18.641 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-31 15:27:18.642627 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642631 | orchestrator | 15:27:18.641 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.642634 | orchestrator | 15:27:18.641 STDOUT terraform:  + fixed_ip { 2025-05-31 15:27:18.642638 | orchestrator | 15:27:18.641 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-31 15:27:18.642642 | orchestrator | 15:27:18.641 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.642646 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642653 | orchestrator | 15:27:18.641 STDOUT terraform:  } 2025-05-31 15:27:18.642657 | orchestrator | 15:27:18.641 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-31 15:27:18.642660 | orchestrator | 15:27:18.641 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-31 15:27:18.642664 | orchestrator | 15:27:18.641 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.642668 | orchestrator | 15:27:18.641 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.642672 | orchestrator | 15:27:18.642 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.642675 | orchestrator | 15:27:18.642 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.642679 | orchestrator | 15:27:18.642 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.642683 | orchestrator | 15:27:18.642 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.642686 | orchestrator | 15:27:18.642 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.642690 | orchestrator | 15:27:18.642 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.642696 | orchestrator | 15:27:18.642 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.642699 | orchestrator | 15:27:18.642 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.642703 | orchestrator | 15:27:18.642 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.642707 | orchestrator | 15:27:18.642 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.642710 | orchestrator | 15:27:18.642 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.642714 | orchestrator | 15:27:18.642 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.642718 | orchestrator | 15:27:18.642 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.642722 | orchestrator | 15:27:18.642 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.642725 | orchestrator | 15:27:18.642 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642729 | orchestrator | 15:27:18.642 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.642733 | orchestrator | 15:27:18.642 STDOUT terraform:  } 2025-05-31 15:27:18.642736 | orchestrator | 15:27:18.642 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642740 | orchestrator | 15:27:18.642 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-31 15:27:18.642745 | 
orchestrator | 15:27:18.642 STDOUT terraform:  } 2025-05-31 15:27:18.642749 | orchestrator | 15:27:18.642 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642753 | orchestrator | 15:27:18.642 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 15:27:18.642757 | orchestrator | 15:27:18.642 STDOUT terraform:  } 2025-05-31 15:27:18.642760 | orchestrator | 15:27:18.642 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.642764 | orchestrator | 15:27:18.642 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-31 15:27:18.642773 | orchestrator | 15:27:18.642 STDOUT terraform:  } 2025-05-31 15:27:18.642777 | orchestrator | 15:27:18.642 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.642780 | orchestrator | 15:27:18.642 STDOUT terraform:  + fixed_ip { 2025-05-31 15:27:18.642786 | orchestrator | 15:27:18.642 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-31 15:27:18.642789 | orchestrator | 15:27:18.642 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.642795 | orchestrator | 15:27:18.642 STDOUT terraform:  } 2025-05-31 15:27:18.642800 | orchestrator | 15:27:18.642 STDOUT terraform:  } 2025-05-31 15:27:18.646123 | orchestrator | 15:27:18.642 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-31 15:27:18.646151 | orchestrator | 15:27:18.642 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-31 15:27:18.646157 | orchestrator | 15:27:18.642 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.646161 | orchestrator | 15:27:18.642 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.646165 | orchestrator | 15:27:18.642 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.646169 | orchestrator | 15:27:18.642 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.646173 | orchestrator | 15:27:18.643 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.646176 | orchestrator | 15:27:18.643 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.646180 | orchestrator | 15:27:18.643 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.646184 | orchestrator | 15:27:18.643 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.646187 | orchestrator | 15:27:18.643 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.646191 | orchestrator | 15:27:18.643 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.646194 | orchestrator | 15:27:18.643 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.646198 | orchestrator | 15:27:18.643 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.646202 | orchestrator | 15:27:18.643 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.646206 | orchestrator | 15:27:18.643 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.646210 | orchestrator | 15:27:18.643 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.646214 | orchestrator | 15:27:18.643 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.646217 | orchestrator | 15:27:18.644 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646221 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.646225 | orchestrator | 
15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646228 | orchestrator | 15:27:18.644 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646232 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-31 15:27:18.646244 | orchestrator | 15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646248 | orchestrator | 15:27:18.644 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646252 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 15:27:18.646255 | orchestrator | 15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646259 | orchestrator | 15:27:18.644 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646263 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-31 15:27:18.646266 | orchestrator | 15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646270 | orchestrator | 15:27:18.644 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.646274 | orchestrator | 15:27:18.644 STDOUT terraform:  + fixed_ip { 2025-05-31 15:27:18.646277 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-31 15:27:18.646281 | orchestrator | 15:27:18.644 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.646285 | orchestrator | 15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646288 | orchestrator | 15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646292 | orchestrator | 15:27:18.644 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-31 15:27:18.646296 | orchestrator | 15:27:18.644 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-31 15:27:18.646307 | orchestrator | 15:27:18.644 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.646311 | orchestrator | 15:27:18.644 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.646314 | orchestrator | 15:27:18.644 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.646323 | orchestrator | 15:27:18.644 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.646327 | orchestrator | 15:27:18.644 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.646331 | orchestrator | 15:27:18.644 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.646335 | orchestrator | 15:27:18.644 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.646338 | orchestrator | 15:27:18.644 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.646342 | orchestrator | 15:27:18.644 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.646346 | orchestrator | 15:27:18.644 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.646349 | orchestrator | 15:27:18.644 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.646353 | orchestrator | 15:27:18.644 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.646357 | orchestrator | 15:27:18.644 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.646360 | orchestrator | 15:27:18.644 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.646367 | orchestrator | 15:27:18.644 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.646375 | orchestrator | 15:27:18.644 STDOUT terraform:  + tenant_id = (known 
after apply) 2025-05-31 15:27:18.646379 | orchestrator | 15:27:18.644 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646383 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.646386 | orchestrator | 15:27:18.644 STDOUT terraform:  } 2025-05-31 15:27:18.646390 | orchestrator | 15:27:18.644 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646394 | orchestrator | 15:27:18.644 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-31 15:27:18.646398 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646401 | orchestrator | 15:27:18.645 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646405 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 15:27:18.646409 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646412 | orchestrator | 15:27:18.645 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646416 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-31 15:27:18.646420 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646423 | orchestrator | 15:27:18.645 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.646427 | orchestrator | 15:27:18.645 STDOUT terraform:  + fixed_ip { 2025-05-31 15:27:18.646431 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-31 15:27:18.646434 | orchestrator | 15:27:18.645 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.646438 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646442 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646446 | orchestrator | 15:27:18.645 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-31 15:27:18.646449 | orchestrator | 15:27:18.645 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-31 15:27:18.646453 | orchestrator | 15:27:18.645 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.646457 | orchestrator | 15:27:18.645 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-31 15:27:18.646463 | orchestrator | 15:27:18.645 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-31 15:27:18.646467 | orchestrator | 15:27:18.645 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.646471 | orchestrator | 15:27:18.645 STDOUT terraform:  + device_id = (known after apply) 2025-05-31 15:27:18.646474 | orchestrator | 15:27:18.645 STDOUT terraform:  + device_owner = (known after apply) 2025-05-31 15:27:18.646478 | orchestrator | 15:27:18.645 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-31 15:27:18.646482 | orchestrator | 15:27:18.645 STDOUT terraform:  + dns_name = (known after apply) 2025-05-31 15:27:18.646485 | orchestrator | 15:27:18.645 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.646489 | orchestrator | 15:27:18.645 STDOUT terraform:  + mac_address = (known after apply) 2025-05-31 15:27:18.646496 | orchestrator | 15:27:18.645 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.646500 | orchestrator | 15:27:18.645 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-31 15:27:18.646503 | orchestrator | 15:27:18.645 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-31 15:27:18.646507 | orchestrator | 
15:27:18.645 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.646511 | orchestrator | 15:27:18.645 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-31 15:27:18.646514 | orchestrator | 15:27:18.645 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.646521 | orchestrator | 15:27:18.645 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646524 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-31 15:27:18.646528 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646532 | orchestrator | 15:27:18.645 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646536 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-31 15:27:18.646539 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646543 | orchestrator | 15:27:18.645 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646547 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-31 15:27:18.646550 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646554 | orchestrator | 15:27:18.645 STDOUT terraform:  + allowed_address_pairs { 2025-05-31 15:27:18.646558 | orchestrator | 15:27:18.645 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-31 15:27:18.646561 | orchestrator | 15:27:18.645 STDOUT terraform:  } 2025-05-31 15:27:18.646565 | orchestrator | 15:27:18.645 STDOUT terraform:  + binding (known after apply) 2025-05-31 15:27:18.646569 | orchestrator | 15:27:18.646 STDOUT terraform:  + fixed_ip { 2025-05-31 15:27:18.646573 | orchestrator | 15:27:18.646 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-31 15:27:18.646576 | orchestrator | 15:27:18.646 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.646580 | orchestrator | 15:27:18.646 STDOUT terraform:  } 2025-05-31 15:27:18.646584 | orchestrator | 15:27:18.646 STDOUT terraform:  } 2025-05-31 15:27:18.646587 | orchestrator | 15:27:18.646 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-31 15:27:18.646591 | orchestrator | 15:27:18.646 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-31 15:27:18.646595 | orchestrator | 15:27:18.646 STDOUT terraform:  + force_destroy = false 2025-05-31 15:27:18.646599 | orchestrator | 15:27:18.646 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.646602 | orchestrator | 15:27:18.646 STDOUT terraform:  + port_id = (known after apply) 2025-05-31 15:27:18.646606 | orchestrator | 15:27:18.646 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.646610 | orchestrator | 15:27:18.646 STDOUT terraform:  + router_id = (known after apply) 2025-05-31 15:27:18.646619 | orchestrator | 15:27:18.646 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-31 15:27:18.646623 | orchestrator | 15:27:18.646 STDOUT terraform:  } 2025-05-31 15:27:18.646626 | orchestrator | 15:27:18.646 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-31 15:27:18.646630 | orchestrator | 15:27:18.646 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-31 15:27:18.646634 | orchestrator | 15:27:18.646 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-31 15:27:18.646638 | orchestrator | 15:27:18.646 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.646641 | 
orchestrator | 15:27:18.646 STDOUT terraform:  + availability_zone_hints = [ 2025-05-31 15:27:18.646645 | orchestrator | 15:27:18.646 STDOUT terraform:  + "nova", 2025-05-31 15:27:18.646649 | orchestrator | 15:27:18.646 STDOUT terraform:  ] 2025-05-31 15:27:18.646655 | orchestrator | 15:27:18.646 STDOUT terraform:  + distributed = (known after apply) 2025-05-31 15:27:18.646660 | orchestrator | 15:27:18.646 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-31 15:27:18.646691 | orchestrator | 15:27:18.646 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-31 15:27:18.646728 | orchestrator | 15:27:18.646 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.646758 | orchestrator | 15:27:18.646 STDOUT terraform:  + name = "testbed" 2025-05-31 15:27:18.646795 | orchestrator | 15:27:18.646 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.646832 | orchestrator | 15:27:18.646 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.646860 | orchestrator | 15:27:18.646 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-31 15:27:18.646866 | orchestrator | 15:27:18.646 STDOUT terraform:  } 2025-05-31 15:27:18.646922 | orchestrator | 15:27:18.646 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-31 15:27:18.646974 | orchestrator | 15:27:18.646 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-31 15:27:18.646993 | orchestrator | 15:27:18.646 STDOUT terraform:  + description = "ssh" 2025-05-31 15:27:18.647045 | orchestrator | 15:27:18.646 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.647051 | orchestrator | 15:27:18.647 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.647076 | orchestrator | 15:27:18.647 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.647096 | orchestrator | 15:27:18.647 STDOUT terraform:  + port_range_max = 22 2025-05-31 15:27:18.647116 | orchestrator | 15:27:18.647 STDOUT terraform:  + port_range_min = 22 2025-05-31 15:27:18.647137 | orchestrator | 15:27:18.647 STDOUT terraform:  + protocol = "tcp" 2025-05-31 15:27:18.647168 | orchestrator | 15:27:18.647 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.647197 | orchestrator | 15:27:18.647 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.647222 | orchestrator | 15:27:18.647 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.647253 | orchestrator | 15:27:18.647 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.647282 | orchestrator | 15:27:18.647 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.647296 | orchestrator | 15:27:18.647 STDOUT terraform:  } 2025-05-31 15:27:18.647351 | orchestrator | 15:27:18.647 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-31 15:27:18.647403 | orchestrator | 15:27:18.647 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-31 15:27:18.647428 | orchestrator | 15:27:18.647 STDOUT terraform:  + description = "wireguard" 2025-05-31 15:27:18.647452 | orchestrator | 15:27:18.647 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.647471 | orchestrator | 15:27:18.647 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.647503 | orchestrator | 15:27:18.647 
STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.647523 | orchestrator | 15:27:18.647 STDOUT terraform:  + port_range_max = 51820 2025-05-31 15:27:18.647543 | orchestrator | 15:27:18.647 STDOUT terraform:  + port_range_min = 51820 2025-05-31 15:27:18.647563 | orchestrator | 15:27:18.647 STDOUT terraform:  + protocol = "udp" 2025-05-31 15:27:18.647594 | orchestrator | 15:27:18.647 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.647623 | orchestrator | 15:27:18.647 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.647648 | orchestrator | 15:27:18.647 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.647677 | orchestrator | 15:27:18.647 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.647708 | orchestrator | 15:27:18.647 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.647718 | orchestrator | 15:27:18.647 STDOUT terraform:  } 2025-05-31 15:27:18.647770 | orchestrator | 15:27:18.647 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-31 15:27:18.647823 | orchestrator | 15:27:18.647 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-31 15:27:18.647846 | orchestrator | 15:27:18.647 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.647867 | orchestrator | 15:27:18.647 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.647899 | orchestrator | 15:27:18.647 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.647918 | orchestrator | 15:27:18.647 STDOUT terraform:  + protocol = "tcp" 2025-05-31 15:27:18.647949 | orchestrator | 15:27:18.647 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.647979 | orchestrator | 15:27:18.647 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.648026 | orchestrator | 15:27:18.647 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-31 15:27:18.648057 | orchestrator | 15:27:18.648 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.648087 | orchestrator | 15:27:18.648 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.648097 | orchestrator | 15:27:18.648 STDOUT terraform:  } 2025-05-31 15:27:18.648149 | orchestrator | 15:27:18.648 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-31 15:27:18.648202 | orchestrator | 15:27:18.648 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-31 15:27:18.648226 | orchestrator | 15:27:18.648 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.648247 | orchestrator | 15:27:18.648 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.648278 | orchestrator | 15:27:18.648 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.648298 | orchestrator | 15:27:18.648 STDOUT terraform:  + protocol = "udp" 2025-05-31 15:27:18.648336 | orchestrator | 15:27:18.648 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.648366 | orchestrator | 15:27:18.648 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.648395 | orchestrator | 15:27:18.648 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-31 15:27:18.648425 | orchestrator | 15:27:18.648 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.648454 | 
orchestrator | 15:27:18.648 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.648467 | orchestrator | 15:27:18.648 STDOUT terraform:  } 2025-05-31 15:27:18.648519 | orchestrator | 15:27:18.648 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-31 15:27:18.648572 | orchestrator | 15:27:18.648 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-31 15:27:18.648596 | orchestrator | 15:27:18.648 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.648617 | orchestrator | 15:27:18.648 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.648648 | orchestrator | 15:27:18.648 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.648669 | orchestrator | 15:27:18.648 STDOUT terraform:  + protocol = "icmp" 2025-05-31 15:27:18.648700 | orchestrator | 15:27:18.648 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.648752 | orchestrator | 15:27:18.648 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.648776 | orchestrator | 15:27:18.648 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.648806 | orchestrator | 15:27:18.648 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.648835 | orchestrator | 15:27:18.648 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.648842 | orchestrator | 15:27:18.648 STDOUT terraform:  } 2025-05-31 15:27:18.648895 | orchestrator | 15:27:18.648 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-31 15:27:18.648948 | orchestrator | 15:27:18.648 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-31 15:27:18.648971 | orchestrator | 15:27:18.648 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.648991 | orchestrator | 15:27:18.648 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.649033 | orchestrator | 15:27:18.648 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.649059 | orchestrator | 15:27:18.649 STDOUT terraform:  + protocol = "tcp" 2025-05-31 15:27:18.649084 | orchestrator | 15:27:18.649 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.649114 | orchestrator | 15:27:18.649 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.649138 | orchestrator | 15:27:18.649 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.649167 | orchestrator | 15:27:18.649 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.649197 | orchestrator | 15:27:18.649 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.649203 | orchestrator | 15:27:18.649 STDOUT terraform:  } 2025-05-31 15:27:18.649258 | orchestrator | 15:27:18.649 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-31 15:27:18.649308 | orchestrator | 15:27:18.649 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-31 15:27:18.649332 | orchestrator | 15:27:18.649 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.649353 | orchestrator | 15:27:18.649 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.649383 | orchestrator | 15:27:18.649 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.649404 | orchestrator | 15:27:18.649 STDOUT 
terraform:  + protocol = "udp" 2025-05-31 15:27:18.649435 | orchestrator | 15:27:18.649 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.655716 | orchestrator | 15:27:18.649 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.655752 | orchestrator | 15:27:18.649 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.655757 | orchestrator | 15:27:18.649 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.655761 | orchestrator | 15:27:18.649 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.655765 | orchestrator | 15:27:18.649 STDOUT terraform:  } 2025-05-31 15:27:18.655770 | orchestrator | 15:27:18.649 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-31 15:27:18.655775 | orchestrator | 15:27:18.649 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-31 15:27:18.655779 | orchestrator | 15:27:18.649 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.655783 | orchestrator | 15:27:18.650 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.655786 | orchestrator | 15:27:18.650 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.655790 | orchestrator | 15:27:18.650 STDOUT terraform:  + protocol = "icmp" 2025-05-31 15:27:18.655794 | orchestrator | 15:27:18.650 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.655798 | orchestrator | 15:27:18.650 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.655801 | orchestrator | 15:27:18.650 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.655805 | orchestrator | 15:27:18.650 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.655820 | orchestrator | 15:27:18.650 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.655823 | orchestrator | 15:27:18.650 STDOUT terraform:  } 2025-05-31 15:27:18.655827 | orchestrator | 15:27:18.650 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-31 15:27:18.655831 | orchestrator | 15:27:18.650 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-31 15:27:18.655835 | orchestrator | 15:27:18.650 STDOUT terraform:  + description = "vrrp" 2025-05-31 15:27:18.655839 | orchestrator | 15:27:18.650 STDOUT terraform:  + direction = "ingress" 2025-05-31 15:27:18.655843 | orchestrator | 15:27:18.650 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 15:27:18.655846 | orchestrator | 15:27:18.650 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.655856 | orchestrator | 15:27:18.650 STDOUT terraform:  + protocol = "112" 2025-05-31 15:27:18.655860 | orchestrator | 15:27:18.650 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.655864 | orchestrator | 15:27:18.650 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 15:27:18.655867 | orchestrator | 15:27:18.650 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 15:27:18.655871 | orchestrator | 15:27:18.650 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 15:27:18.655875 | orchestrator | 15:27:18.650 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.655878 | orchestrator | 15:27:18.650 STDOUT terraform:  } 2025-05-31 15:27:18.655882 | orchestrator | 15:27:18.650 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-05-31 15:27:18.655886 | orchestrator | 15:27:18.650 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-31 15:27:18.655890 | orchestrator | 15:27:18.650 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.655893 | orchestrator | 15:27:18.650 STDOUT terraform:  + description = "management security group" 2025-05-31 15:27:18.655897 | orchestrator | 15:27:18.650 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.655901 | orchestrator | 15:27:18.650 STDOUT terraform:  + name = "testbed-management" 2025-05-31 15:27:18.655905 | orchestrator | 15:27:18.650 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.655916 | orchestrator | 15:27:18.650 STDOUT terraform:  + stateful = (known after apply) 2025-05-31 15:27:18.655920 | orchestrator | 15:27:18.650 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.655924 | orchestrator | 15:27:18.650 STDOUT terraform:  } 2025-05-31 15:27:18.655927 | orchestrator | 15:27:18.650 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-31 15:27:18.655931 | orchestrator | 15:27:18.655 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-31 15:27:18.655935 | orchestrator | 15:27:18.655 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.655939 | orchestrator | 15:27:18.655 STDOUT terraform:  + description = "node security group" 2025-05-31 15:27:18.655945 | orchestrator | 15:27:18.655 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.655949 | orchestrator | 15:27:18.655 STDOUT terraform:  + name = "testbed-node" 2025-05-31 15:27:18.655953 | orchestrator | 15:27:18.655 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.655956 | orchestrator | 15:27:18.655 STDOUT terraform:  + stateful = (known after apply) 2025-05-31 15:27:18.655960 | orchestrator | 15:27:18.655 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.655964 | orchestrator | 15:27:18.655 STDOUT terraform:  } 2025-05-31 15:27:18.655968 | orchestrator | 15:27:18.655 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-31 15:27:18.655971 | orchestrator | 15:27:18.655 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-31 15:27:18.655975 | orchestrator | 15:27:18.655 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 15:27:18.655979 | orchestrator | 15:27:18.655 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-31 15:27:18.656058 | orchestrator | 15:27:18.655 STDOUT terraform:  + dns_nameservers = [ 2025-05-31 15:27:18.656093 | orchestrator | 15:27:18.656 STDOUT terraform:  + "8.8.8.8", 2025-05-31 15:27:18.656121 | orchestrator | 15:27:18.656 STDOUT terraform:  + "9.9.9.9", 2025-05-31 15:27:18.656149 | orchestrator | 15:27:18.656 STDOUT terraform:  ] 2025-05-31 15:27:18.656179 | orchestrator | 15:27:18.656 STDOUT terraform:  + enable_dhcp = true 2025-05-31 15:27:18.656231 | orchestrator | 15:27:18.656 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-31 15:27:18.656776 | orchestrator | 15:27:18.656 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.656814 | orchestrator | 15:27:18.656 STDOUT terraform:  + ip_version = 4 2025-05-31 15:27:18.656855 | orchestrator | 15:27:18.656 STDOUT terraform:  + ipv6_address_mode = (known 
after apply) 2025-05-31 15:27:18.656895 | orchestrator | 15:27:18.656 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-31 15:27:18.656942 | orchestrator | 15:27:18.656 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-31 15:27:18.656982 | orchestrator | 15:27:18.656 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 15:27:18.657028 | orchestrator | 15:27:18.656 STDOUT terraform:  + no_gateway = false 2025-05-31 15:27:18.657082 | orchestrator | 15:27:18.657 STDOUT terraform:  + region = (known after apply) 2025-05-31 15:27:18.657121 | orchestrator | 15:27:18.657 STDOUT terraform:  + service_types = (known after apply) 2025-05-31 15:27:18.657160 | orchestrator | 15:27:18.657 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 15:27:18.657190 | orchestrator | 15:27:18.657 STDOUT terraform:  + allocation_pool { 2025-05-31 15:27:18.657226 | orchestrator | 15:27:18.657 STDOUT terraform:  + end = "192.168.31.250" 2025-05-31 15:27:18.657259 | orchestrator | 15:27:18.657 STDOUT terraform:  + start = "192.168.31.200" 2025-05-31 15:27:18.657281 | orchestrator | 15:27:18.657 STDOUT terraform:  } 2025-05-31 15:27:18.657302 | orchestrator | 15:27:18.657 STDOUT terraform:  } 2025-05-31 15:27:18.657341 | orchestrator | 15:27:18.657 STDOUT terraform:  # terraform_data.image will be created 2025-05-31 15:27:18.657403 | orchestrator | 15:27:18.657 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-31 15:27:18.657438 | orchestrator | 15:27:18.657 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.657468 | orchestrator | 15:27:18.657 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-31 15:27:18.657501 | orchestrator | 15:27:18.657 STDOUT terraform:  + output = (known after apply) 2025-05-31 15:27:18.657524 | orchestrator | 15:27:18.657 STDOUT terraform:  } 2025-05-31 15:27:18.657561 | orchestrator | 15:27:18.657 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-31 15:27:18.657599 | orchestrator | 15:27:18.657 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-31 15:27:18.657632 | orchestrator | 15:27:18.657 STDOUT terraform:  + id = (known after apply) 2025-05-31 15:27:18.657661 | orchestrator | 15:27:18.657 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-31 15:27:18.657699 | orchestrator | 15:27:18.657 STDOUT terraform:  + output = (known after apply) 2025-05-31 15:27:18.657720 | orchestrator | 15:27:18.657 STDOUT terraform:  } 2025-05-31 15:27:18.657759 | orchestrator | 15:27:18.657 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-31 15:27:18.657780 | orchestrator | 15:27:18.657 STDOUT terraform: Changes to Outputs: 2025-05-31 15:27:18.657812 | orchestrator | 15:27:18.657 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-31 15:27:18.657844 | orchestrator | 15:27:18.657 STDOUT terraform:  + private_key = (sensitive value) 2025-05-31 15:27:18.879765 | orchestrator | 15:27:18.879 STDOUT terraform: terraform_data.image: Creating... 2025-05-31 15:27:18.879844 | orchestrator | 15:27:18.879 STDOUT terraform: terraform_data.image_node: Creating... 
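The plan excerpt above ends with the management subnet and the two sensitive outputs. A minimal HCL sketch of what produces exactly these plan entries, using only the values printed above (the network reference and the output values are reconstructed and should be treated as assumptions; the gateway is left to the provider default, matching gateway_ip "(known after apply)"):

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    # Both outputs are marked sensitive, which is why the apply at the end of
    # this log prints them only as hidden values.
    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # reconstructed reference
      sensitive = true
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key  # reconstructed reference
      sensitive = true
    }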
2025-05-31 15:27:18.879855 | orchestrator | 15:27:18.879 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=6436c39f-615e-acac-18ea-efcd68a0a5f2] 2025-05-31 15:27:18.881153 | orchestrator | 15:27:18.880 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=56934be0-595d-f8a4-e95f-c7f1b5db7e4c] 2025-05-31 15:27:18.903110 | orchestrator | 15:27:18.902 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-31 15:27:18.903346 | orchestrator | 15:27:18.903 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-31 15:27:18.910078 | orchestrator | 15:27:18.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-31 15:27:18.910680 | orchestrator | 15:27:18.910 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-31 15:27:18.911746 | orchestrator | 15:27:18.911 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-31 15:27:18.912699 | orchestrator | 15:27:18.912 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-31 15:27:18.915540 | orchestrator | 15:27:18.915 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-31 15:27:18.917775 | orchestrator | 15:27:18.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-31 15:27:18.917943 | orchestrator | 15:27:18.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-31 15:27:18.919395 | orchestrator | 15:27:18.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-31 15:27:19.342465 | orchestrator | 15:27:19.342 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-31 15:27:19.349538 | orchestrator | 15:27:19.348 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-31 15:27:19.351604 | orchestrator | 15:27:19.351 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-31 15:27:19.356932 | orchestrator | 15:27:19.356 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-31 15:27:19.391141 | orchestrator | 15:27:19.390 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-05-31 15:27:19.399146 | orchestrator | 15:27:19.398 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-31 15:27:24.987485 | orchestrator | 15:27:24.987 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=cc116e76-5868-4ac9-8c8b-20df0af709d6] 2025-05-31 15:27:24.998803 | orchestrator | 15:27:24.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-31 15:27:28.913905 | orchestrator | 15:27:28.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-31 15:27:28.914086 | orchestrator | 15:27:28.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-31 15:27:28.917158 | orchestrator | 15:27:28.916 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-31 15:27:28.919282 | orchestrator | 15:27:28.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... 
[10s elapsed] 2025-05-31 15:27:28.920401 | orchestrator | 15:27:28.920 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-31 15:27:28.920539 | orchestrator | 15:27:28.920 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-31 15:27:29.352354 | orchestrator | 15:27:29.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-31 15:27:29.358676 | orchestrator | 15:27:29.358 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-31 15:27:29.400892 | orchestrator | 15:27:29.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-31 15:27:29.536973 | orchestrator | 15:27:29.536 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=f41ad415-f570-4f0b-8f25-7db49ff0cbfa] 2025-05-31 15:27:29.546311 | orchestrator | 15:27:29.545 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-31 15:27:29.558350 | orchestrator | 15:27:29.558 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae] 2025-05-31 15:27:29.562425 | orchestrator | 15:27:29.562 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-31 15:27:29.568652 | orchestrator | 15:27:29.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=42e25697-c4c6-4260-bea6-0d0d8bf43604] 2025-05-31 15:27:29.573332 | orchestrator | 15:27:29.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-31 15:27:29.575670 | orchestrator | 15:27:29.575 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=5b4bcfff-b004-43b2-aa93-003eb1863ed5] 2025-05-31 15:27:29.579971 | orchestrator | 15:27:29.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-31 15:27:29.624279 | orchestrator | 15:27:29.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da] 2025-05-31 15:27:29.631957 | orchestrator | 15:27:29.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=88f31b31-d9a0-4986-b3f2-c890facc2af6] 2025-05-31 15:27:29.633040 | orchestrator | 15:27:29.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-31 15:27:29.639893 | orchestrator | 15:27:29.639 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-31 15:27:29.660470 | orchestrator | 15:27:29.660 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=35c478e1-8eef-4047-84ea-c6dce0624e72] 2025-05-31 15:27:29.672347 | orchestrator | 15:27:29.672 STDOUT terraform: local_file.id_rsa_pub: Creating... 
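The nine node_volume resources above come from a single counted resource block. A sketch of that pattern, assuming a hypothetical name prefix and size (neither is printed in this excerpt):

    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count = 9                                  # node_volume[0] .. node_volume[8], as in the log
      name  = "testbed-volume-${count.index}"    # hypothetical naming scheme
      size  = 20                                 # hypothetical size in GB
    }

The node_base_volume and manager_base_volume resources that start right afterwards follow the same counted pattern, presumably with an image reference added so they can serve as boot volumes (an assumption; the image wiring for those volumes is not shown in this excerpt).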
2025-05-31 15:27:29.680401 | orchestrator | 15:27:29.680 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=e95e2a72f9153a63dc63b76a9dc73083c14abac8] 2025-05-31 15:27:29.684733 | orchestrator | 15:27:29.684 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=ddb9130a-4326-47f1-9e34-5e5625e80e81] 2025-05-31 15:27:29.688990 | orchestrator | 15:27:29.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=86312f09-330b-437e-8315-0e2c008d5fbe] 2025-05-31 15:27:29.691299 | orchestrator | 15:27:29.691 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-31 15:27:29.691806 | orchestrator | 15:27:29.691 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-31 15:27:29.697496 | orchestrator | 15:27:29.697 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=161c3a29cd6ec518d6eab96431f20349bfb0db78] 2025-05-31 15:27:35.002232 | orchestrator | 15:27:35.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-31 15:27:35.307279 | orchestrator | 15:27:35.306 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=ce505b94-3a05-45a8-8440-80c284efa891] 2025-05-31 15:27:35.579704 | orchestrator | 15:27:35.579 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=6fd06088-b097-40ec-9540-7e6c9235c46e] 2025-05-31 15:27:35.588938 | orchestrator | 15:27:35.588 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-31 15:27:39.547286 | orchestrator | 15:27:39.546 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-31 15:27:39.563529 | orchestrator | 15:27:39.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-31 15:27:39.574948 | orchestrator | 15:27:39.574 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-31 15:27:39.581107 | orchestrator | 15:27:39.580 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-31 15:27:39.634715 | orchestrator | 15:27:39.634 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-31 15:27:39.640976 | orchestrator | 15:27:39.640 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-31 15:27:39.961283 | orchestrator | 15:27:39.960 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=f4907d85-21a9-4777-b71a-1559c505de70] 2025-05-31 15:27:39.967416 | orchestrator | 15:27:39.967 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=934f4ed8-ee22-4056-bf57-9bb86fd501b4] 2025-05-31 15:27:40.006223 | orchestrator | 15:27:40.005 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=ec58de5b-e589-4299-a3da-23bfa2e4200b] 2025-05-31 15:27:40.015508 | orchestrator | 15:27:40.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=4c2c2026-b533-49c5-9163-99a4bbff9cf3] 2025-05-31 15:27:40.030931 | orchestrator | 15:27:40.030 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=5c5a5f97-37d9-439f-bed4-14a1c4ce88ff] 2025-05-31 15:27:40.056128 | orchestrator | 15:27:40.055 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=e1159f44-add9-48c4-ad36-81abc037f539] 2025-05-31 15:27:43.647865 | orchestrator | 15:27:43.647 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=90b40fb9-f83c-4625-bb7f-825e2f9707e2] 2025-05-31 15:27:43.657428 | orchestrator | 15:27:43.656 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-31 15:27:43.661004 | orchestrator | 15:27:43.660 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-31 15:27:43.669400 | orchestrator | 15:27:43.668 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-31 15:27:43.860117 | orchestrator | 15:27:43.859 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=7f3bb881-1837-4d5e-bab3-8d001c8a690d] 2025-05-31 15:27:43.868773 | orchestrator | 15:27:43.868 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-31 15:27:43.870693 | orchestrator | 15:27:43.870 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-31 15:27:43.871175 | orchestrator | 15:27:43.870 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-31 15:27:43.872977 | orchestrator | 15:27:43.872 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-31 15:27:43.874941 | orchestrator | 15:27:43.874 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-31 15:27:43.877051 | orchestrator | 15:27:43.876 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-31 15:27:43.896349 | orchestrator | 15:27:43.896 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=e2ce9e6d-3f1f-4c30-8114-1a144e5a0aa1] 2025-05-31 15:27:43.901851 | orchestrator | 15:27:43.901 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-31 15:27:43.901935 | orchestrator | 15:27:43.901 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
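The security group and rules being created here correspond to the plan entries shown earlier in this section. Reconstructed from the attributes the plan prints for testbed-management and its SSH rule:

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    # security_group_management_rule1 from the plan: TCP 22 ("ssh") from anywhere
    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

The wireguard rule (UDP 51820), the intra-subnet TCP/UDP rules (remote_ip_prefix 192.168.16.0/20), the ICMP rules, and the VRRP rule (protocol "112") in the plan differ only in protocol, port range, and remote_ip_prefix.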
2025-05-31 15:27:43.902731 | orchestrator | 15:27:43.902 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-31 15:27:44.063124 | orchestrator | 15:27:44.062 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=e58a5c84-f6cb-4411-8b69-21b8653aff09] 2025-05-31 15:27:44.080477 | orchestrator | 15:27:44.080 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-31 15:27:44.236483 | orchestrator | 15:27:44.236 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=236eec50-4f62-409f-b69f-a06e082e7694] 2025-05-31 15:27:44.243258 | orchestrator | 15:27:44.242 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-31 15:27:44.493289 | orchestrator | 15:27:44.492 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=c49ce217-f6dc-4d23-bec8-6b6d683b5e7b] 2025-05-31 15:27:44.507534 | orchestrator | 15:27:44.507 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-31 15:27:44.538419 | orchestrator | 15:27:44.537 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=ac7e158d-4f7d-45d9-98e1-310cd813add1] 2025-05-31 15:27:44.554001 | orchestrator | 15:27:44.553 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-31 15:27:44.676998 | orchestrator | 15:27:44.676 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=de561134-4b7b-4364-ad2d-420527a2101d] 2025-05-31 15:27:44.687011 | orchestrator | 15:27:44.686 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f96799e2-1b9d-46d9-8a0f-9020a2335400] 2025-05-31 15:27:44.693971 | orchestrator | 15:27:44.693 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-31 15:27:44.698071 | orchestrator | 15:27:44.697 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-31 15:27:44.857449 | orchestrator | 15:27:44.857 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=e7e3438a-349e-48b5-9efc-3dc1b4a1a1b0] 2025-05-31 15:27:44.870372 | orchestrator | 15:27:44.870 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
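The management ports created in this step carry the fixed IP and allowed_address_pairs shown in the plan excerpt at the top of this section. A sketch of such a port; whether that particular plan entry belongs to the manager port or one of the node ports is not visible here, and the network, subnet, and security-group references are reconstructed, so treat all of that as assumptions:

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id
      security_group_ids = [
        openstack_networking_secgroup_v2.security_group_management.id,  # reconstructed
      ]

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.15"
      }

      # One allowed_address_pairs block per prefix listed in the plan
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
    }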
2025-05-31 15:27:44.910109 | orchestrator | 15:27:44.909 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=44dd7f1e-ecc4-4738-be72-b9afa9a1703c] 2025-05-31 15:27:45.061335 | orchestrator | 15:27:45.060 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=70fe1901-386c-4946-a636-156652a844a7] 2025-05-31 15:27:49.603697 | orchestrator | 15:27:49.603 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=05e5112f-46cc-4885-a898-4f8075927728] 2025-05-31 15:27:49.943849 | orchestrator | 15:27:49.943 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=e92c3cdf-be68-4502-8bf9-e759fca73528] 2025-05-31 15:27:50.128573 | orchestrator | 15:27:50.128 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=40372515-66e1-4ef4-b683-07d7d58556a9] 2025-05-31 15:27:50.640019 | orchestrator | 15:27:50.639 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=aca20198-4a2a-4050-8f54-ec36c49b2ca7] 2025-05-31 15:27:50.715877 | orchestrator | 15:27:50.715 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=f8be698a-cfd1-4e06-92d2-36dcd4637ca9] 2025-05-31 15:27:50.789553 | orchestrator | 15:27:50.789 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=bb9015a2-53f4-4a54-8330-c386abf13ac2] 2025-05-31 15:27:51.322361 | orchestrator | 15:27:51.321 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=94b6b02d-7412-4c9c-a9c3-582490fcce2a] 2025-05-31 15:27:51.401420 | orchestrator | 15:27:51.401 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=3707480e-d649-4e43-9c6f-b4ceed0d6371] 2025-05-31 15:27:51.428405 | orchestrator | 15:27:51.428 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-31 15:27:51.436494 | orchestrator | 15:27:51.436 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-31 15:27:51.440017 | orchestrator | 15:27:51.439 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-31 15:27:51.440430 | orchestrator | 15:27:51.440 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-31 15:27:51.448045 | orchestrator | 15:27:51.447 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-31 15:27:51.458776 | orchestrator | 15:27:51.458 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-31 15:27:51.458826 | orchestrator | 15:27:51.458 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-31 15:27:58.161958 | orchestrator | 15:27:58.161 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=286a1501-f63d-462d-8606-c7d87db4a800] 2025-05-31 15:27:58.171619 | orchestrator | 15:27:58.171 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-31 15:27:58.175724 | orchestrator | 15:27:58.175 STDOUT terraform: local_file.inventory: Creating... 2025-05-31 15:27:58.178154 | orchestrator | 15:27:58.178 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
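The router, its subnet interface, and the manager floating IP plus its association created above can be expressed roughly as follows; the router attributes come from the plan, while the floating-IP pool name and the port reference are assumptions not shown in this excerpt:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "external"   # hypothetical pool name
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id  # reconstructed
    }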
2025-05-31 15:27:58.185649 | orchestrator | 15:27:58.185 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=686fa4310e8a7fe74793d11453581c9be5778fe3] 2025-05-31 15:27:58.187910 | orchestrator | 15:27:58.187 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=3f6cdf1ef30c5f8008aacf3404e00b0ecb3a37aa] 2025-05-31 15:27:58.844798 | orchestrator | 15:27:58.844 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=286a1501-f63d-462d-8606-c7d87db4a800] 2025-05-31 15:28:01.441217 | orchestrator | 15:28:01.440 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-31 15:28:01.441320 | orchestrator | 15:28:01.441 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-31 15:28:01.443282 | orchestrator | 15:28:01.443 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-31 15:28:01.458633 | orchestrator | 15:28:01.458 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-31 15:28:01.458751 | orchestrator | 15:28:01.458 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-31 15:28:01.459795 | orchestrator | 15:28:01.459 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-31 15:28:11.442456 | orchestrator | 15:28:11.442 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-31 15:28:11.442578 | orchestrator | 15:28:11.442 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-31 15:28:11.443320 | orchestrator | 15:28:11.443 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-31 15:28:11.459914 | orchestrator | 15:28:11.459 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-31 15:28:11.460003 | orchestrator | 15:28:11.459 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-31 15:28:11.460937 | orchestrator | 15:28:11.460 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-31 15:28:21.443133 | orchestrator | 15:28:21.442 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-31 15:28:21.443389 | orchestrator | 15:28:21.442 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-31 15:28:21.443981 | orchestrator | 15:28:21.443 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-31 15:28:21.460299 | orchestrator | 15:28:21.459 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-31 15:28:21.460417 | orchestrator | 15:28:21.460 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-05-31 15:28:21.461342 | orchestrator | 15:28:21.461 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-05-31 15:28:21.761322 | orchestrator | 15:28:21.760 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=b6bac38d-c324-471f-93c1-6b3d335f1b76] 2025-05-31 15:28:21.979302 | orchestrator | 15:28:21.978 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=ae8645d2-d915-47ad-b34e-91695516b9da] 2025-05-31 15:28:22.211045 | orchestrator | 15:28:22.210 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=2eb26cd5-d00f-4dd8-a291-64d22f9cca73] 2025-05-31 15:28:22.236397 | orchestrator | 15:28:22.236 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=f7ed685b-ca83-4a6a-9fd9-1e2b0ac48bbf] 2025-05-31 15:28:22.238973 | orchestrator | 15:28:22.238 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=d6b496a7-7ae3-40c3-8008-771ab3b37164] 2025-05-31 15:28:31.461491 | orchestrator | 15:28:31.461 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-05-31 15:28:32.317092 | orchestrator | 15:28:32.316 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=0c0c6941-b28d-4c2a-8462-f5bdb7e8b560] 2025-05-31 15:28:32.339947 | orchestrator | 15:28:32.339 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-31 15:28:32.348632 | orchestrator | 15:28:32.348 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7550682540885352600] 2025-05-31 15:28:32.357688 | orchestrator | 15:28:32.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-31 15:28:32.357762 | orchestrator | 15:28:32.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-31 15:28:32.357772 | orchestrator | 15:28:32.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-31 15:28:32.358181 | orchestrator | 15:28:32.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-31 15:28:32.358207 | orchestrator | 15:28:32.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-31 15:28:32.358215 | orchestrator | 15:28:32.358 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-31 15:28:32.361191 | orchestrator | 15:28:32.360 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-31 15:28:32.366617 | orchestrator | 15:28:32.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-31 15:28:32.382119 | orchestrator | 15:28:32.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-31 15:28:32.395399 | orchestrator | 15:28:32.395 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
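The node servers above and the volume attachments that follow pair each instance with its data volumes (the attachment IDs show the instance/volume UUID pairs). A sketch under the assumption of a generic flavor and the image looked up earlier in this log; the naming, the flavor, and the index mapping between attachments and servers are hypothetical:

    resource "openstack_compute_instance_v2" "node_server" {
      count       = 6
      name        = "testbed-node-${count.index}"   # hypothetical naming
      flavor_name = "SCS-4V-16-50"                  # hypothetical flavor
      image_id    = data.openstack_images_image_v2.image_node.id
      key_pair    = openstack_compute_keypair_v2.key.name

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      # The real mapping of the nine volumes onto the servers is not derivable
      # from this excerpt; the modulo mapping below is purely illustrative.
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }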
2025-05-31 15:28:37.710675 | orchestrator | 15:28:37.710 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=ae8645d2-d915-47ad-b34e-91695516b9da/86312f09-330b-437e-8315-0e2c008d5fbe] 2025-05-31 15:28:37.712359 | orchestrator | 15:28:37.711 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=2eb26cd5-d00f-4dd8-a291-64d22f9cca73/433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da] 2025-05-31 15:28:37.724979 | orchestrator | 15:28:37.724 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=f7ed685b-ca83-4a6a-9fd9-1e2b0ac48bbf/f41ad415-f570-4f0b-8f25-7db49ff0cbfa] 2025-05-31 15:28:37.746404 | orchestrator | 15:28:37.746 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=ae8645d2-d915-47ad-b34e-91695516b9da/42e25697-c4c6-4260-bea6-0d0d8bf43604] 2025-05-31 15:28:37.756342 | orchestrator | 15:28:37.756 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=2eb26cd5-d00f-4dd8-a291-64d22f9cca73/35c478e1-8eef-4047-84ea-c6dce0624e72] 2025-05-31 15:28:37.765620 | orchestrator | 15:28:37.765 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=f7ed685b-ca83-4a6a-9fd9-1e2b0ac48bbf/5b4bcfff-b004-43b2-aa93-003eb1863ed5] 2025-05-31 15:28:37.798559 | orchestrator | 15:28:37.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=f7ed685b-ca83-4a6a-9fd9-1e2b0ac48bbf/ddb9130a-4326-47f1-9e34-5e5625e80e81] 2025-05-31 15:28:37.800030 | orchestrator | 15:28:37.799 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=ae8645d2-d915-47ad-b34e-91695516b9da/88f31b31-d9a0-4986-b3f2-c890facc2af6] 2025-05-31 15:28:37.824329 | orchestrator | 15:28:37.823 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=2eb26cd5-d00f-4dd8-a291-64d22f9cca73/dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae] 2025-05-31 15:28:42.396412 | orchestrator | 15:28:42.396 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-31 15:28:52.396879 | orchestrator | 15:28:52.396 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-31 15:28:52.986900 | orchestrator | 15:28:52.986 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=50aadb3b-51e4-4c43-a12a-846e8b80ebe4] 2025-05-31 15:28:53.014865 | orchestrator | 15:28:53.014 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2025-05-31 15:28:53.014972 | orchestrator | 15:28:53.014 STDOUT terraform: Outputs: 2025-05-31 15:28:53.014995 | orchestrator | 15:28:53.014 STDOUT terraform: manager_address = 2025-05-31 15:28:53.015014 | orchestrator | 15:28:53.014 STDOUT terraform: private_key = 2025-05-31 15:28:53.390724 | orchestrator | ok: Runtime: 0:01:43.787746 2025-05-31 15:28:53.427159 | 2025-05-31 15:28:53.427305 | TASK [Fetch manager address] 2025-05-31 15:28:53.864560 | orchestrator | ok 2025-05-31 15:28:53.872374 | 2025-05-31 15:28:53.872496 | TASK [Set manager_host address] 2025-05-31 15:28:53.953261 | orchestrator | ok 2025-05-31 15:28:53.962563 | 2025-05-31 15:28:53.962707 | LOOP [Update ansible collections] 2025-05-31 15:28:54.861174 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 15:28:54.861718 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-31 15:28:54.861816 | orchestrator | Starting galaxy collection install process 2025-05-31 15:28:54.861858 | orchestrator | Process install dependency map 2025-05-31 15:28:54.861894 | orchestrator | Starting collection install process 2025-05-31 15:28:54.861927 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-05-31 15:28:54.861965 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-05-31 15:28:54.862004 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-31 15:28:54.862088 | orchestrator | ok: Item: commons Runtime: 0:00:00.568014 2025-05-31 15:28:55.739002 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-31 15:28:55.739266 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 15:28:55.739353 | orchestrator | Starting galaxy collection install process 2025-05-31 15:28:55.739420 | orchestrator | Process install dependency map 2025-05-31 15:28:55.739482 | orchestrator | Starting collection install process 2025-05-31 15:28:55.739539 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-05-31 15:28:55.739594 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-05-31 15:28:55.739647 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-31 15:28:55.739734 | orchestrator | ok: Item: services Runtime: 0:00:00.617760 2025-05-31 15:28:55.760097 | 2025-05-31 15:28:55.760284 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-31 15:29:06.335943 | orchestrator | ok 2025-05-31 15:29:06.344133 | 2025-05-31 15:29:06.344258 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-31 15:30:06.392330 | orchestrator | ok 2025-05-31 15:30:06.403043 | 2025-05-31 15:30:06.403207 | TASK [Fetch manager ssh hostkey] 2025-05-31 15:30:08.001011 | orchestrator | Output suppressed because no_log was given 2025-05-31 15:30:08.017376 | 2025-05-31 15:30:08.017570 | TASK [Get ssh keypair from terraform environment] 2025-05-31 15:30:08.559213 | orchestrator | ok: Runtime: 0:00:00.011138 2025-05-31 15:30:08.575460 | 2025-05-31 15:30:08.575659 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-31 15:30:08.624168 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-31 15:30:08.634360 | 2025-05-31 15:30:08.634500 | TASK [Run manager part 0] 2025-05-31 15:30:09.782329 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 15:30:09.827471 | orchestrator | 2025-05-31 15:30:09.827520 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-31 15:30:09.827528 | orchestrator | 2025-05-31 15:30:09.827540 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-31 15:30:11.367069 | orchestrator | ok: [testbed-manager] 2025-05-31 15:30:11.367178 | orchestrator | 2025-05-31 15:30:11.367226 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-31 15:30:11.367249 | orchestrator | 2025-05-31 15:30:11.367270 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:30:13.193737 | orchestrator | ok: [testbed-manager] 2025-05-31 15:30:13.193811 | orchestrator | 2025-05-31 15:30:13.193825 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-31 15:30:13.800378 | orchestrator | ok: [testbed-manager] 2025-05-31 15:30:13.800559 | orchestrator | 2025-05-31 15:30:13.800581 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-31 15:30:13.848185 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:13.848232 | orchestrator | 2025-05-31 15:30:13.848242 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-31 15:30:13.885001 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:13.885052 | orchestrator | 2025-05-31 15:30:13.885060 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-31 15:30:13.914780 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:13.914831 | orchestrator | 2025-05-31 15:30:13.914839 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-31 15:30:13.949592 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:13.949647 | orchestrator | 2025-05-31 15:30:13.949655 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-31 15:30:13.981739 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:13.981781 | orchestrator | 2025-05-31 15:30:13.981789 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-31 15:30:14.015673 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:14.015752 | orchestrator | 2025-05-31 15:30:14.015769 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-31 15:30:14.044194 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:30:14.044241 | orchestrator | 2025-05-31 15:30:14.044247 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-31 15:30:14.833431 | orchestrator | changed: [testbed-manager] 2025-05-31 15:30:14.833492 | orchestrator | 2025-05-31 15:30:14.833500 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-31 15:33:10.729113 | orchestrator | changed: [testbed-manager] 2025-05-31 15:33:10.729265 | orchestrator | 2025-05-31 15:33:10.729275 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-31 15:34:23.846279 | orchestrator | changed: [testbed-manager] 2025-05-31 15:34:23.846379 | orchestrator | 2025-05-31 15:34:23.846392 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-31 15:34:48.273106 | orchestrator | changed: [testbed-manager] 2025-05-31 15:34:48.273199 | orchestrator | 2025-05-31 15:34:48.273213 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-31 15:34:56.714681 | orchestrator | changed: [testbed-manager] 2025-05-31 15:34:56.714784 | orchestrator | 2025-05-31 15:34:56.714803 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-31 15:34:56.767493 | orchestrator | ok: [testbed-manager] 2025-05-31 15:34:56.767580 | orchestrator | 2025-05-31 15:34:56.767590 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-31 15:34:57.565210 | orchestrator | ok: [testbed-manager] 2025-05-31 15:34:57.565296 | orchestrator | 2025-05-31 15:34:57.565348 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-31 15:34:58.340411 | orchestrator | changed: [testbed-manager] 2025-05-31 15:34:58.340561 | orchestrator | 2025-05-31 15:34:58.340579 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-31 15:35:05.041760 | orchestrator | changed: [testbed-manager] 2025-05-31 15:35:05.041899 | orchestrator | 2025-05-31 15:35:05.041949 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-31 15:35:11.311523 | orchestrator | changed: [testbed-manager] 2025-05-31 15:35:11.311574 | orchestrator | 2025-05-31 15:35:11.311587 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-31 15:35:14.039394 | orchestrator | changed: [testbed-manager] 2025-05-31 15:35:14.039451 | orchestrator | 2025-05-31 15:35:14.039465 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-31 15:35:15.946442 | orchestrator | changed: [testbed-manager] 2025-05-31 15:35:15.946502 | orchestrator | 2025-05-31 15:35:15.946515 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-31 15:35:17.083613 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-31 15:35:17.083727 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-31 15:35:17.083752 | orchestrator | 2025-05-31 15:35:17.083768 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-31 15:35:17.125293 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-31 15:35:17.125430 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-31 15:35:17.125452 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-31 15:35:17.125472 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-31 15:35:22.191923 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-31 15:35:22.191971 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-31 15:35:22.191979 | orchestrator | 2025-05-31 15:35:22.191987 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-31 15:35:22.779093 | orchestrator | changed: [testbed-manager] 2025-05-31 15:35:22.779144 | orchestrator | 2025-05-31 15:35:22.779153 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-31 15:39:48.328796 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-31 15:39:48.328879 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-31 15:39:48.328912 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-31 15:39:48.328925 | orchestrator | 2025-05-31 15:39:48.328938 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-31 15:39:50.604014 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-31 15:39:50.604082 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-31 15:39:50.604096 | orchestrator | 2025-05-31 15:39:50.604109 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-31 15:39:50.604120 | orchestrator | 2025-05-31 15:39:50.604132 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:39:52.015619 | orchestrator | ok: [testbed-manager] 2025-05-31 15:39:52.015702 | orchestrator | 2025-05-31 15:39:52.015713 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-31 15:39:52.061061 | orchestrator | ok: [testbed-manager] 2025-05-31 15:39:52.061120 | orchestrator | 2025-05-31 15:39:52.061132 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-31 15:39:52.137019 | orchestrator | ok: [testbed-manager] 2025-05-31 15:39:52.137071 | orchestrator | 2025-05-31 15:39:52.137082 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-31 15:39:52.856432 | orchestrator | changed: [testbed-manager] 2025-05-31 15:39:52.856529 | orchestrator | 2025-05-31 15:39:52.856545 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-31 15:39:53.567895 | orchestrator | changed: [testbed-manager] 2025-05-31 15:39:53.567991 | orchestrator | 2025-05-31 15:39:53.568007 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-31 15:39:54.906208 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-31 15:39:54.906323 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-31 15:39:54.906340 | orchestrator | 2025-05-31 15:39:54.906367 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-31 15:39:56.238667 | orchestrator | changed: [testbed-manager] 2025-05-31 15:39:56.238810 | orchestrator | 2025-05-31 15:39:56.238827 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-31 15:39:57.946093 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 
15:39:57.946277 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-31 15:39:57.946294 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:39:57.946307 | orchestrator | 2025-05-31 15:39:57.946320 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-31 15:39:58.497561 | orchestrator | changed: [testbed-manager] 2025-05-31 15:39:58.498322 | orchestrator | 2025-05-31 15:39:58.498349 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-31 15:39:58.567294 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:39:58.567346 | orchestrator | 2025-05-31 15:39:58.567353 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-31 15:39:59.438264 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:39:59.438364 | orchestrator | changed: [testbed-manager] 2025-05-31 15:39:59.438380 | orchestrator | 2025-05-31 15:39:59.438393 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-31 15:39:59.480198 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:39:59.480287 | orchestrator | 2025-05-31 15:39:59.480304 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-31 15:39:59.518557 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:39:59.518633 | orchestrator | 2025-05-31 15:39:59.518648 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-31 15:39:59.559120 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:39:59.559204 | orchestrator | 2025-05-31 15:39:59.559220 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-31 15:39:59.605551 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:39:59.605646 | orchestrator | 2025-05-31 15:39:59.605661 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-31 15:40:00.373483 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:00.373573 | orchestrator | 2025-05-31 15:40:00.373589 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-31 15:40:00.373602 | orchestrator | 2025-05-31 15:40:00.373616 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:40:01.762439 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:01.762508 | orchestrator | 2025-05-31 15:40:01.762525 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-31 15:40:02.722459 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:02.722540 | orchestrator | 2025-05-31 15:40:02.722556 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:40:02.722572 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-31 15:40:02.722585 | orchestrator | 2025-05-31 15:40:03.025853 | orchestrator | ok: Runtime: 0:09:53.866853 2025-05-31 15:40:03.044136 | 2025-05-31 15:40:03.044340 | TASK [Point out that the log in on the manager is now possible] 2025-05-31 15:40:03.091845 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
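At this point the manager is reachable and, as the message above says, 'make login' from the testbed checkout opens a shell on it. As a minimal sketch of what such a login amounts to, assuming the 'dragon' operator user created above and the MANAGER_PUBLIC_IP_ADDRESS that appears further down in this log (the identity file path is a placeholder, not taken from the log):

  # Open a shell on the testbed manager by hand instead of via 'make login'.
  # 81.163.193.95 and the 'dragon' user are taken from this log; the key path
  # is a placeholder for whatever key the local checkout uses.
  ssh -o StrictHostKeyChecking=no \
      -i /path/to/testbed-ssh-key \
      dragon@81.163.193.95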
2025-05-31 15:40:03.101698 | 2025-05-31 15:40:03.101823 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-31 15:40:03.143513 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-31 15:40:03.151493 | 2025-05-31 15:40:03.151618 | TASK [Run manager part 1 + 2] 2025-05-31 15:40:04.017148 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 15:40:04.085498 | orchestrator | 2025-05-31 15:40:04.085607 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-31 15:40:04.085623 | orchestrator | 2025-05-31 15:40:04.085653 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:40:07.005758 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:07.005862 | orchestrator | 2025-05-31 15:40:07.005922 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-31 15:40:07.043473 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:40:07.043528 | orchestrator | 2025-05-31 15:40:07.043539 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-31 15:40:07.082117 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:07.082159 | orchestrator | 2025-05-31 15:40:07.082167 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 15:40:07.132397 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:07.132435 | orchestrator | 2025-05-31 15:40:07.132444 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 15:40:07.204337 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:07.204393 | orchestrator | 2025-05-31 15:40:07.204409 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 15:40:07.274147 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:07.274192 | orchestrator | 2025-05-31 15:40:07.274208 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 15:40:07.312281 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-31 15:40:07.312326 | orchestrator | 2025-05-31 15:40:07.312340 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 15:40:08.014909 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:08.014982 | orchestrator | 2025-05-31 15:40:08.015000 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 15:40:08.071830 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:40:08.071885 | orchestrator | 2025-05-31 15:40:08.071899 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31 15:40:09.487143 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:09.487203 | orchestrator | 2025-05-31 15:40:09.487217 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-31 15:40:10.041579 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:10.041664 | orchestrator | 2025-05-31 15:40:10.041725 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 15:40:11.175883 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:11.175943 | orchestrator | 2025-05-31 15:40:11.175958 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 15:40:23.869678 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:23.869798 | orchestrator | 2025-05-31 15:40:23.869813 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-31 15:40:24.573183 | orchestrator | ok: [testbed-manager] 2025-05-31 15:40:24.573318 | orchestrator | 2025-05-31 15:40:24.573337 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-31 15:40:24.624258 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:40:24.624374 | orchestrator | 2025-05-31 15:40:24.624391 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-31 15:40:25.547358 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:25.547450 | orchestrator | 2025-05-31 15:40:25.547467 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-31 15:40:26.471332 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:26.471419 | orchestrator | 2025-05-31 15:40:26.471433 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-31 15:40:27.031682 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:27.031804 | orchestrator | 2025-05-31 15:40:27.031822 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-31 15:40:27.072569 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-31 15:40:27.072678 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-31 15:40:27.072694 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-31 15:40:27.072706 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-31 15:40:30.113341 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:30.113441 | orchestrator | 2025-05-31 15:40:30.113459 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-31 15:40:38.754753 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-31 15:40:38.754848 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-31 15:40:38.754867 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-31 15:40:38.754880 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-31 15:40:38.754898 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-31 15:40:38.754910 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-31 15:40:38.754921 | orchestrator | 2025-05-31 15:40:38.754934 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-31 15:40:39.763696 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:39.763771 | orchestrator | 2025-05-31 15:40:39.763780 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-31 15:40:39.808007 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:40:39.808063 | orchestrator | 2025-05-31 15:40:39.808072 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-31 15:40:42.853539 | orchestrator | changed: [testbed-manager] 2025-05-31 15:40:42.853629 | orchestrator | 2025-05-31 15:40:42.853645 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-31 15:40:42.897170 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:40:42.897270 | orchestrator | 2025-05-31 15:40:42.897296 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-31 15:42:18.750379 | orchestrator | changed: [testbed-manager] 2025-05-31 15:42:18.750434 | orchestrator | 2025-05-31 15:42:18.750448 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 15:42:19.805913 | orchestrator | ok: [testbed-manager] 2025-05-31 15:42:19.806642 | orchestrator | 2025-05-31 15:42:19.806664 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:42:19.806676 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-31 15:42:19.806685 | orchestrator | 2025-05-31 15:42:20.290567 | orchestrator | ok: Runtime: 0:02:16.441560 2025-05-31 15:42:20.310331 | 2025-05-31 15:42:20.310495 | TASK [Reboot manager] 2025-05-31 15:42:21.848002 | orchestrator | ok: Runtime: 0:00:00.937608 2025-05-31 15:42:21.865620 | 2025-05-31 15:42:21.865805 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-31 15:42:35.756898 | orchestrator | ok 2025-05-31 15:42:35.767099 | 2025-05-31 15:42:35.767254 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-31 15:43:35.814288 | orchestrator | ok 2025-05-31 15:43:35.823313 | 2025-05-31 15:43:35.823431 | TASK [Deploy manager + bootstrap nodes] 2025-05-31 15:43:38.230793 | orchestrator | 2025-05-31 15:43:38.231141 | orchestrator | # DEPLOY MANAGER 2025-05-31 15:43:38.231172 | orchestrator | 2025-05-31 15:43:38.231188 | orchestrator | + set -e 2025-05-31 15:43:38.231202 | orchestrator | + echo 2025-05-31 15:43:38.231216 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-31 15:43:38.231234 | orchestrator | + echo 2025-05-31 15:43:38.231287 | orchestrator | + cat /opt/manager-vars.sh 2025-05-31 15:43:38.234068 | orchestrator | export NUMBER_OF_NODES=6 2025-05-31 15:43:38.234109 | orchestrator | 2025-05-31 15:43:38.234124 | orchestrator | export CEPH_VERSION=reef 2025-05-31 15:43:38.234138 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-31 15:43:38.234151 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-31 15:43:38.234177 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-31 15:43:38.234195 | orchestrator | 2025-05-31 15:43:38.234215 | orchestrator | export ARA=false 2025-05-31 15:43:38.234226 | orchestrator | export TEMPEST=false 2025-05-31 15:43:38.234245 | orchestrator | export IS_ZUUL=true 2025-05-31 15:43:38.234257 | orchestrator | 2025-05-31 15:43:38.234275 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 15:43:38.234287 | orchestrator | export EXTERNAL_API=false 2025-05-31 15:43:38.234298 | orchestrator | 2025-05-31 15:43:38.234320 | orchestrator | export IMAGE_USER=ubuntu 2025-05-31 15:43:38.234331 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-31 15:43:38.234342 | orchestrator | 2025-05-31 15:43:38.234356 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-31 15:43:38.234377 | orchestrator | 2025-05-31 15:43:38.234389 | orchestrator | + echo 2025-05-31 15:43:38.234400 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 15:43:38.235036 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 15:43:38.235055 | orchestrator | ++ INTERACTIVE=false 2025-05-31 15:43:38.235071 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 15:43:38.235082 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 15:43:38.235271 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 15:43:38.235288 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 15:43:38.235299 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 15:43:38.235355 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 15:43:38.235369 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 15:43:38.235380 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 15:43:38.235391 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 15:43:38.235402 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-31 15:43:38.235413 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-31 15:43:38.235424 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 15:43:38.235435 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 15:43:38.235454 | orchestrator | ++ export ARA=false 2025-05-31 15:43:38.235465 | orchestrator | ++ ARA=false 2025-05-31 15:43:38.235485 | orchestrator | ++ export TEMPEST=false 2025-05-31 15:43:38.235496 | orchestrator | ++ TEMPEST=false 2025-05-31 15:43:38.235507 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 15:43:38.235518 | orchestrator | ++ IS_ZUUL=true 2025-05-31 15:43:38.235534 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 15:43:38.235546 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 15:43:38.235557 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 15:43:38.235568 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 15:43:38.235578 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 15:43:38.235589 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 15:43:38.235600 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 15:43:38.235611 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 
15:43:38.235622 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 15:43:38.235633 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 15:43:38.235645 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-31 15:43:38.278813 | orchestrator | + docker version 2025-05-31 15:43:38.513815 | orchestrator | Client: Docker Engine - Community 2025-05-31 15:43:38.513943 | orchestrator | Version: 26.1.4 2025-05-31 15:43:38.514142 | orchestrator | API version: 1.45 2025-05-31 15:43:38.514161 | orchestrator | Go version: go1.21.11 2025-05-31 15:43:38.514171 | orchestrator | Git commit: 5650f9b 2025-05-31 15:43:38.514182 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-31 15:43:38.514192 | orchestrator | OS/Arch: linux/amd64 2025-05-31 15:43:38.514204 | orchestrator | Context: default 2025-05-31 15:43:38.514213 | orchestrator | 2025-05-31 15:43:38.514224 | orchestrator | Server: Docker Engine - Community 2025-05-31 15:43:38.514234 | orchestrator | Engine: 2025-05-31 15:43:38.514244 | orchestrator | Version: 26.1.4 2025-05-31 15:43:38.514253 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-31 15:43:38.514263 | orchestrator | Go version: go1.21.11 2025-05-31 15:43:38.514272 | orchestrator | Git commit: de5c9cf 2025-05-31 15:43:38.514315 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-31 15:43:38.514325 | orchestrator | OS/Arch: linux/amd64 2025-05-31 15:43:38.514335 | orchestrator | Experimental: false 2025-05-31 15:43:38.514345 | orchestrator | containerd: 2025-05-31 15:43:38.514354 | orchestrator | Version: 1.7.27 2025-05-31 15:43:38.514364 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-31 15:43:38.514374 | orchestrator | runc: 2025-05-31 15:43:38.514384 | orchestrator | Version: 1.2.5 2025-05-31 15:43:38.514394 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-31 15:43:38.514403 | orchestrator | docker-init: 2025-05-31 15:43:38.514413 | orchestrator | Version: 0.19.0 2025-05-31 15:43:38.514423 | orchestrator | GitCommit: de40ad0 2025-05-31 15:43:38.516722 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-31 15:43:38.525092 | orchestrator | + set -e 2025-05-31 15:43:38.525123 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 15:43:38.525135 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 15:43:38.525146 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 15:43:38.525157 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 15:43:38.526007 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 15:43:38.526078 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 15:43:38.526093 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 15:43:38.526106 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-31 15:43:38.526119 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-31 15:43:38.526132 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 15:43:38.526145 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 15:43:38.526158 | orchestrator | ++ export ARA=false 2025-05-31 15:43:38.526170 | orchestrator | ++ ARA=false 2025-05-31 15:43:38.526183 | orchestrator | ++ export TEMPEST=false 2025-05-31 15:43:38.526195 | orchestrator | ++ TEMPEST=false 2025-05-31 15:43:38.526206 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 15:43:38.526217 | orchestrator | ++ IS_ZUUL=true 2025-05-31 15:43:38.526228 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 15:43:38.526240 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 15:43:38.526251 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 15:43:38.526262 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 15:43:38.526272 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 15:43:38.526283 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 15:43:38.526294 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 15:43:38.526305 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 15:43:38.526316 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 15:43:38.526327 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 15:43:38.526338 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 15:43:38.526349 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 15:43:38.526360 | orchestrator | ++ INTERACTIVE=false 2025-05-31 15:43:38.526370 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 15:43:38.526381 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 15:43:38.526392 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-31 15:43:38.526403 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-31 15:43:38.532801 | orchestrator | + set -e 2025-05-31 15:43:38.532826 | orchestrator | + VERSION=8.1.0 2025-05-31 15:43:38.532842 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-31 15:43:38.539166 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-31 15:43:38.539199 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-31 15:43:38.543734 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-31 15:43:38.548028 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-31 15:43:38.555577 | orchestrator | /opt/configuration ~ 2025-05-31 15:43:38.555621 | orchestrator | + set -e 2025-05-31 15:43:38.555633 | orchestrator | + pushd /opt/configuration 2025-05-31 15:43:38.555645 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 15:43:38.558124 | orchestrator | + source /opt/venv/bin/activate 2025-05-31 15:43:38.558886 | orchestrator | ++ deactivate nondestructive 2025-05-31 15:43:38.558914 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:38.558926 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:38.558938 | orchestrator | ++ hash -r 2025-05-31 15:43:38.558955 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:38.559007 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-31 15:43:38.559027 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-31 15:43:38.559046 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-31 15:43:38.559152 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-31 15:43:38.559197 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-31 15:43:38.559209 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-31 15:43:38.559220 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-31 15:43:38.559232 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 15:43:38.559255 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 15:43:38.559271 | orchestrator | ++ export PATH 2025-05-31 15:43:38.559282 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:38.559293 | orchestrator | ++ '[' -z '' ']' 2025-05-31 15:43:38.559312 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-31 15:43:38.559323 | orchestrator | ++ PS1='(venv) ' 2025-05-31 15:43:38.559333 | orchestrator | ++ export PS1 2025-05-31 15:43:38.559348 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-31 15:43:38.559359 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-31 15:43:38.559370 | orchestrator | ++ hash -r 2025-05-31 15:43:38.559438 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-31 15:43:39.511095 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-31 15:43:39.511209 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-31 15:43:39.512418 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-31 15:43:39.513549 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-31 15:43:39.514706 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-31 15:43:39.524390 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-05-31 15:43:39.525915 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-31 15:43:39.526817 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-31 15:43:39.528210 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-31 15:43:39.557600 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-31 15:43:39.558869 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-31 15:43:39.560368 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-31 15:43:39.561993 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-31 15:43:39.565820 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-31 15:43:39.760604 | orchestrator | ++ which gilt 2025-05-31 15:43:39.764328 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-31 15:43:39.764351 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-31 15:43:39.957590 | orchestrator | osism.cfg-generics: 2025-05-31 15:43:39.957721 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-31 15:43:41.486183 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-31 15:43:41.486335 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-31 15:43:41.486390 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-31 15:43:41.486406 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-31 15:43:42.415119 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-31 15:43:42.425609 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-31 15:43:42.716441 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-31 15:43:42.762874 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 15:43:42.763007 | orchestrator | + deactivate 2025-05-31 15:43:42.763027 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-31 15:43:42.763055 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 15:43:42.763078 | orchestrator | + export PATH 2025-05-31 15:43:42.763089 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-31 15:43:42.763103 | orchestrator | + '[' -n '' ']' 2025-05-31 15:43:42.763114 | orchestrator | + hash -r 2025-05-31 15:43:42.763125 | orchestrator | + '[' -n '' ']' 2025-05-31 15:43:42.763137 | orchestrator | + unset VIRTUAL_ENV 2025-05-31 15:43:42.763148 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-31 15:43:42.763159 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-31 15:43:42.763171 | orchestrator | + unset -f deactivate 2025-05-31 15:43:42.763182 | orchestrator | ~ 2025-05-31 15:43:42.763195 | orchestrator | + popd 2025-05-31 15:43:42.764783 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-31 15:43:42.764831 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-31 15:43:42.765295 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-31 15:43:42.815760 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-31 15:43:42.815868 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-31 15:43:42.815889 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-31 15:43:42.865924 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 15:43:42.865954 | orchestrator | + source /opt/venv/bin/activate 2025-05-31 15:43:42.866007 | orchestrator | ++ deactivate nondestructive 2025-05-31 15:43:42.866062 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:42.866075 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:42.866092 | orchestrator | ++ hash -r 2025-05-31 15:43:42.866105 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:42.866116 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-31 15:43:42.866127 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-31 15:43:42.866138 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-31 15:43:42.866224 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-31 15:43:42.866239 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-31 15:43:42.866250 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-31 15:43:42.866278 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-31 15:43:42.866290 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 15:43:42.866306 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 15:43:42.866319 | orchestrator | ++ export PATH 2025-05-31 15:43:42.866342 | orchestrator | ++ '[' -n '' ']' 2025-05-31 15:43:42.866353 | orchestrator | ++ '[' -z '' ']' 2025-05-31 15:43:42.866488 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-31 15:43:42.866504 | orchestrator | ++ PS1='(venv) ' 2025-05-31 15:43:42.866516 | orchestrator | ++ export PS1 2025-05-31 15:43:42.866527 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-31 15:43:42.866538 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-31 15:43:42.866549 | orchestrator | ++ hash -r 2025-05-31 15:43:42.866691 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-31 15:43:43.971826 | orchestrator | 2025-05-31 15:43:43.971965 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-31 15:43:43.971995 | orchestrator | 2025-05-31 15:43:43.972002 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 15:43:44.552150 | orchestrator | ok: [testbed-manager] 2025-05-31 15:43:44.552260 | orchestrator | 2025-05-31 15:43:44.552278 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-31 15:43:45.494104 | orchestrator | changed: [testbed-manager] 2025-05-31 15:43:45.494234 | orchestrator | 2025-05-31 15:43:45.494261 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-31 
15:43:45.494282 | orchestrator | 2025-05-31 15:43:45.494301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:43:47.704764 | orchestrator | ok: [testbed-manager] 2025-05-31 15:43:47.704877 | orchestrator | 2025-05-31 15:43:47.704893 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-31 15:43:52.597810 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-31 15:43:52.597946 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-31 15:43:52.597974 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-31 15:43:52.598125 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-31 15:43:52.598138 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-31 15:43:52.598155 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-31 15:43:52.598167 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-31 15:43:52.598180 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-31 15:43:52.598192 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-31 15:43:52.598203 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-31 15:43:52.598215 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-31 15:43:52.598226 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-31 15:43:52.598237 | orchestrator | 2025-05-31 15:43:52.598250 | orchestrator | TASK [Check status] ************************************************************ 2025-05-31 15:45:08.564985 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-31 15:45:08.565096 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-31 15:45:08.565112 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-31 15:45:08.565123 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-31 15:45:08.565148 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j482250689656.1586', 'results_file': '/home/dragon/.ansible_async/j482250689656.1586', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565169 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j600396214455.1611', 'results_file': '/home/dragon/.ansible_async/j600396214455.1611', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565186 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-31 15:45:08.565197 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-05-31 15:45:08.565209 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j668242877690.1636', 'results_file': '/home/dragon/.ansible_async/j668242877690.1636', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565220 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j779653550823.1667', 'results_file': '/home/dragon/.ansible_async/j779653550823.1667', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565232 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j618274419775.1699', 'results_file': '/home/dragon/.ansible_async/j618274419775.1699', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565243 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j723844950263.1733', 'results_file': '/home/dragon/.ansible_async/j723844950263.1733', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565254 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-31 15:45:08.565299 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j375618951735.1765', 'results_file': '/home/dragon/.ansible_async/j375618951735.1765', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565315 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j30360951436.1800', 'results_file': '/home/dragon/.ansible_async/j30360951436.1800', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565326 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j730846075311.1832', 'results_file': '/home/dragon/.ansible_async/j730846075311.1832', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565338 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j175071771922.1872', 'results_file': '/home/dragon/.ansible_async/j175071771922.1872', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565349 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j838321177071.1898', 'results_file': '/home/dragon/.ansible_async/j838321177071.1898', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565360 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j862523102263.1930', 'results_file': '/home/dragon/.ansible_async/j862523102263.1930', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-31 15:45:08.565376 | orchestrator | 2025-05-31 15:45:08.565397 | 
orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-31 15:45:08.608282 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:08.608369 | orchestrator | 2025-05-31 15:45:08.608389 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-31 15:45:09.076581 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:09.076711 | orchestrator | 2025-05-31 15:45:09.076727 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-31 15:45:09.403588 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:09.403689 | orchestrator | 2025-05-31 15:45:09.403705 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-31 15:45:09.749428 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:09.749533 | orchestrator | 2025-05-31 15:45:09.749550 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-31 15:45:09.806351 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:45:09.806433 | orchestrator | 2025-05-31 15:45:09.806448 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-31 15:45:10.141371 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:10.141475 | orchestrator | 2025-05-31 15:45:10.141490 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-31 15:45:10.237996 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:45:10.238138 | orchestrator | 2025-05-31 15:45:10.238155 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-31 15:45:10.238168 | orchestrator | 2025-05-31 15:45:10.238179 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:45:11.983598 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:11.983700 | orchestrator | 2025-05-31 15:45:11.983717 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-31 15:45:12.083096 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-31 15:45:12.083182 | orchestrator | 2025-05-31 15:45:12.083198 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-31 15:45:12.137903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-31 15:45:12.138119 | orchestrator | 2025-05-31 15:45:12.138137 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-31 15:45:13.205661 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-31 15:45:13.205764 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-31 15:45:13.205778 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-31 15:45:13.205791 | orchestrator | 2025-05-31 15:45:13.205803 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-31 15:45:14.985645 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-31 15:45:14.985752 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-31 15:45:14.985768 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-31 
15:45:14.985779 | orchestrator | 2025-05-31 15:45:14.985790 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-31 15:45:15.592664 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:45:15.592777 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:15.592795 | orchestrator | 2025-05-31 15:45:15.592835 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-31 15:45:16.201735 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:45:16.201839 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:16.201856 | orchestrator | 2025-05-31 15:45:16.201870 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-31 15:45:16.258623 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:45:16.258707 | orchestrator | 2025-05-31 15:45:16.258722 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-31 15:45:16.603982 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:16.604289 | orchestrator | 2025-05-31 15:45:16.604316 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-31 15:45:16.659230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-31 15:45:16.659320 | orchestrator | 2025-05-31 15:45:16.659334 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-31 15:45:17.677614 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:17.677722 | orchestrator | 2025-05-31 15:45:17.677739 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-31 15:45:18.454491 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:18.454619 | orchestrator | 2025-05-31 15:45:18.454644 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-31 15:45:21.667901 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:21.668078 | orchestrator | 2025-05-31 15:45:21.668096 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-31 15:45:21.792468 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-31 15:45:21.792584 | orchestrator | 2025-05-31 15:45:21.792604 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-31 15:45:21.858215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-31 15:45:21.858335 | orchestrator | 2025-05-31 15:45:21.858352 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-31 15:45:24.330128 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:24.330235 | orchestrator | 2025-05-31 15:45:24.330252 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-31 15:45:24.441146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-31 15:45:24.441258 | orchestrator | 2025-05-31 15:45:24.441271 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-31 
15:45:25.532385 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-31 15:45:25.532491 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-31 15:45:25.532508 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-31 15:45:25.532550 | orchestrator | 2025-05-31 15:45:25.532563 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-31 15:45:25.594308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-31 15:45:25.594411 | orchestrator | 2025-05-31 15:45:25.594431 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-31 15:45:26.241834 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-31 15:45:26.241970 | orchestrator | 2025-05-31 15:45:26.241987 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-31 15:45:26.876751 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:26.876856 | orchestrator | 2025-05-31 15:45:26.876871 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-31 15:45:27.522215 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:45:27.522325 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:27.522349 | orchestrator | 2025-05-31 15:45:27.522369 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-31 15:45:27.908389 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:27.908490 | orchestrator | 2025-05-31 15:45:27.908506 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-31 15:45:28.245586 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:28.245699 | orchestrator | 2025-05-31 15:45:28.245717 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-31 15:45:28.294704 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:45:28.294796 | orchestrator | 2025-05-31 15:45:28.294813 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-31 15:45:28.896105 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:28.896207 | orchestrator | 2025-05-31 15:45:28.896222 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-31 15:45:28.958972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-31 15:45:28.959055 | orchestrator | 2025-05-31 15:45:28.959069 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-31 15:45:29.707078 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-31 15:45:29.707181 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-31 15:45:29.707196 | orchestrator | 2025-05-31 15:45:29.707210 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-31 15:45:30.320393 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-31 15:45:30.320514 | orchestrator | 2025-05-31 15:45:30.320538 | orchestrator | TASK [osism.services.netbox : 
Copy netbox configuration file] ****************** 2025-05-31 15:45:30.927571 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:30.927677 | orchestrator | 2025-05-31 15:45:30.927692 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-31 15:45:30.970089 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:45:30.970183 | orchestrator | 2025-05-31 15:45:30.970198 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-31 15:45:31.573565 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:31.573666 | orchestrator | 2025-05-31 15:45:31.573682 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-31 15:45:33.312972 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:45:33.313087 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:45:33.313103 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:45:33.313116 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:33.313130 | orchestrator | 2025-05-31 15:45:33.313143 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-31 15:45:39.077137 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-31 15:45:39.077251 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-31 15:45:39.077266 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-31 15:45:39.077277 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-31 15:45:39.077310 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-31 15:45:39.077321 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-31 15:45:39.077330 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-31 15:45:39.077358 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-31 15:45:39.077369 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-31 15:45:39.077379 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-31 15:45:39.077389 | orchestrator | 2025-05-31 15:45:39.077400 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-31 15:45:39.702358 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-31 15:45:39.702460 | orchestrator | 2025-05-31 15:45:39.702476 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-31 15:45:39.794206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-31 15:45:39.794298 | orchestrator | 2025-05-31 15:45:39.794312 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-31 15:45:40.487544 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:40.487616 | orchestrator | 2025-05-31 15:45:40.487623 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-31 15:45:41.095149 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:41.095280 | orchestrator | 2025-05-31 15:45:41.095299 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-31 
15:45:41.800530 | orchestrator | changed: [testbed-manager] 2025-05-31 15:45:41.800638 | orchestrator | 2025-05-31 15:45:41.800653 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-31 15:45:43.679331 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:43.679473 | orchestrator | 2025-05-31 15:45:43.679490 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-31 15:45:44.615990 | orchestrator | ok: [testbed-manager] 2025-05-31 15:45:44.616137 | orchestrator | 2025-05-31 15:45:44.616155 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-31 15:46:06.808496 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-31 15:46:06.808651 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:06.808670 | orchestrator | 2025-05-31 15:46:06.808684 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-31 15:46:06.865474 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:06.865510 | orchestrator | 2025-05-31 15:46:06.865523 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-31 15:46:06.865535 | orchestrator | 2025-05-31 15:46:06.865547 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-31 15:46:06.906847 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:06.906915 | orchestrator | 2025-05-31 15:46:06.906930 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-31 15:46:06.996266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-31 15:46:06.996385 | orchestrator | 2025-05-31 15:46:06.996401 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-31 15:46:07.814949 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:07.815080 | orchestrator | 2025-05-31 15:46:07.815096 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-31 15:46:07.887235 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:07.887308 | orchestrator | 2025-05-31 15:46:07.887327 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-31 15:46:07.938211 | orchestrator | ok: [testbed-manager] => { 2025-05-31 15:46:07.938239 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-31 15:46:07.938252 | orchestrator | } 2025-05-31 15:46:07.938263 | orchestrator | 2025-05-31 15:46:07.938275 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-31 15:46:08.564308 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:08.564477 | orchestrator | 2025-05-31 15:46:08.564494 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-31 15:46:09.471493 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:09.471625 | orchestrator | 2025-05-31 15:46:09.471642 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-31 15:46:09.535660 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:09.535699 | orchestrator | 2025-05-31 15:46:09.535712 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-31 15:46:09.589587 | orchestrator | ok: [testbed-manager] => { 2025-05-31 15:46:09.589658 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-31 15:46:09.589673 | orchestrator | } 2025-05-31 15:46:09.589684 | orchestrator | 2025-05-31 15:46:09.589696 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-31 15:46:09.655111 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:09.655166 | orchestrator | 2025-05-31 15:46:09.655178 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-31 15:46:09.718864 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:09.718916 | orchestrator | 2025-05-31 15:46:09.718928 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-31 15:46:09.783926 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:09.783962 | orchestrator | 2025-05-31 15:46:09.783974 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-31 15:46:09.842274 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:09.842297 | orchestrator | 2025-05-31 15:46:09.842308 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-31 15:46:09.906629 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:09.906657 | orchestrator | 2025-05-31 15:46:09.906669 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-31 15:46:09.967412 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:46:09.967464 | orchestrator | 2025-05-31 15:46:09.967482 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-31 15:46:11.481306 | orchestrator | changed: [testbed-manager] 2025-05-31 15:46:11.481447 | orchestrator | 2025-05-31 15:46:11.481463 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-31 15:46:11.558155 | orchestrator | ok: [testbed-manager] 2025-05-31 15:46:11.558274 | orchestrator | 2025-05-31 15:46:11.558289 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-31 15:47:11.612151 | orchestrator | Pausing for 60 seconds 2025-05-31 15:47:11.612283 | orchestrator | changed: [testbed-manager] 2025-05-31 15:47:11.612300 | orchestrator | 2025-05-31 15:47:11.612315 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-31 15:47:11.665667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-31 15:47:11.665763 | orchestrator | 2025-05-31 15:47:11.665778 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-31 15:51:23.237129 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-31 15:51:23.237263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-31 15:51:23.237279 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-05-31 15:51:23.237296 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-31 15:51:23.237320 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-31 15:51:23.237340 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-31 15:51:23.237359 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-31 15:51:23.237396 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-31 15:51:23.237409 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-31 15:51:23.237450 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-31 15:51:23.237462 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-31 15:51:23.237473 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-31 15:51:23.237483 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-31 15:51:23.237494 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-31 15:51:23.237505 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-31 15:51:23.237518 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-31 15:51:23.237529 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-31 15:51:23.237540 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-31 15:51:23.237551 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-31 15:51:23.237561 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-31 15:51:23.237572 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-31 15:51:23.237583 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-31 15:51:23.237593 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-31 15:51:23.237604 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
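
The wall of FAILED - RETRYING messages above comes from the "Check that all containers are in a good state" handler: it polls the freshly restarted NetBox stack and, as shown just below, succeeds after roughly two dozen of its 60 allowed attempts. The command the handler actually runs is not visible in this log; a minimal sketch of that kind of check, with a hypothetical wait_for_project_healthy helper that assumes the state is read from Docker's health status via the compose project label:

    # Hypothetical equivalent of the "good state" check: succeed only when no
    # container of the compose project is still starting or unhealthy.
    wait_for_project_healthy() {
        local project="$1" max_attempts="${2:-60}" delay="${3:-5}" attempt=1 not_ok
        while true; do
            # Every container of the project whose status does not contain "(healthy)".
            not_ok=$(docker ps --filter "label=com.docker.compose.project=${project}" \
                               --format '{{.Names}} {{.Status}}' | grep -v '(healthy)' || true)
            [ -z "$not_ok" ] && return 0
            if [ "$attempt" -ge "$max_attempts" ]; then
                printf 'still not healthy after %s attempts:\n%s\n' "$max_attempts" "$not_ok" >&2
                return 1
            fi
            attempt=$((attempt + 1))
            sleep "$delay"
        done
    }

    # e.g. wait_for_project_healthy netbox 60 5

In an Ansible role the same effect is normally achieved with retries/until on the checking task, which is what produces the "N retries left" countdown seen here.
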
2025-05-31 15:51:23.237646 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:23.237661 | orchestrator | 2025-05-31 15:51:23.237673 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-31 15:51:23.237685 | orchestrator | 2025-05-31 15:51:23.237696 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:51:25.388270 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:25.388411 | orchestrator | 2025-05-31 15:51:25.388438 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-31 15:51:25.546654 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-31 15:51:25.546761 | orchestrator | 2025-05-31 15:51:25.546779 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-31 15:51:25.603560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-31 15:51:25.603702 | orchestrator | 2025-05-31 15:51:25.603721 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-31 15:51:27.601772 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:27.601882 | orchestrator | 2025-05-31 15:51:27.601899 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-31 15:51:27.660874 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:27.660973 | orchestrator | 2025-05-31 15:51:27.660988 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-31 15:51:27.767076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-31 15:51:27.767176 | orchestrator | 2025-05-31 15:51:27.767191 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-31 15:51:30.723550 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-31 15:51:30.723765 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-31 15:51:30.723789 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-31 15:51:30.723802 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-31 15:51:30.723838 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-31 15:51:30.723851 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-31 15:51:30.723862 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-31 15:51:30.723873 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-31 15:51:30.723885 | orchestrator | 2025-05-31 15:51:30.723898 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-05-31 15:51:31.364633 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:31.364741 | orchestrator | 2025-05-31 15:51:31.364757 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-31 15:51:32.099824 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:32.099933 | orchestrator | 2025-05-31 15:51:32.099949 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-31 15:51:32.204431 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-31 15:51:32.204536 | orchestrator | 2025-05-31 15:51:32.204553 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-31 15:51:33.455440 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-31 15:51:33.455535 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-31 15:51:33.455550 | orchestrator | 2025-05-31 15:51:33.455565 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-31 15:51:34.097493 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:34.097600 | orchestrator | 2025-05-31 15:51:34.097675 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-31 15:51:34.156199 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:51:34.156305 | orchestrator | 2025-05-31 15:51:34.156322 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-31 15:51:34.219144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-31 15:51:34.219235 | orchestrator | 2025-05-31 15:51:34.219248 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-31 15:51:35.645538 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:51:35.645739 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:51:35.645767 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:35.645791 | orchestrator | 2025-05-31 15:51:35.645812 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-31 15:51:36.296485 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:36.296581 | orchestrator | 2025-05-31 15:51:36.296600 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-31 15:51:36.394185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-31 15:51:36.394287 | orchestrator | 2025-05-31 15:51:36.394302 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-31 15:51:37.696962 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:51:37.697069 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 15:51:37.697085 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:37.697099 | orchestrator | 2025-05-31 15:51:37.697111 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-31 15:51:38.341061 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:38.341158 | orchestrator | 2025-05-31 15:51:38.341171 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] ***** 2025-05-31 15:51:39.001130 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:39.001239 | orchestrator | 2025-05-31 15:51:39.001257 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-31 15:51:39.108468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-31 
15:51:39.108548 | orchestrator | 2025-05-31 15:51:39.108557 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-31 15:51:39.698499 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:39.698689 | orchestrator | 2025-05-31 15:51:39.698710 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-31 15:51:40.133165 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:40.133266 | orchestrator | 2025-05-31 15:51:40.133279 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-31 15:51:41.406771 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-31 15:51:41.406883 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-31 15:51:41.406899 | orchestrator | 2025-05-31 15:51:41.406913 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-31 15:51:42.169135 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:42.169241 | orchestrator | 2025-05-31 15:51:42.169257 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-31 15:51:42.567336 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:42.567469 | orchestrator | 2025-05-31 15:51:42.567495 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-31 15:51:42.910756 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:42.910858 | orchestrator | 2025-05-31 15:51:42.910875 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-31 15:51:42.942982 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:51:42.943060 | orchestrator | 2025-05-31 15:51:42.943074 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-31 15:51:43.004650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-31 15:51:43.004734 | orchestrator | 2025-05-31 15:51:43.004748 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-31 15:51:43.054488 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:43.054552 | orchestrator | 2025-05-31 15:51:43.054566 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-31 15:51:45.023057 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-31 15:51:45.023186 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-31 15:51:45.023203 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-31 15:51:45.023216 | orchestrator | 2025-05-31 15:51:45.023230 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-31 15:51:45.720077 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:45.720185 | orchestrator | 2025-05-31 15:51:45.720201 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-31 15:51:46.420531 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:46.420680 | orchestrator | 2025-05-31 15:51:46.420702 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-31 15:51:47.112744 | orchestrator | changed: [testbed-manager] 2025-05-31 
15:51:47.112855 | orchestrator | 2025-05-31 15:51:47.112872 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-31 15:51:47.191814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-31 15:51:47.191903 | orchestrator | 2025-05-31 15:51:47.191914 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-31 15:51:47.225267 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:47.225355 | orchestrator | 2025-05-31 15:51:47.225367 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-31 15:51:47.864401 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-31 15:51:47.864505 | orchestrator | 2025-05-31 15:51:47.864522 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-31 15:51:47.936355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-31 15:51:47.936451 | orchestrator | 2025-05-31 15:51:47.936467 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-31 15:51:48.631258 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:48.631363 | orchestrator | 2025-05-31 15:51:48.631379 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-31 15:51:49.227052 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:49.227139 | orchestrator | 2025-05-31 15:51:49.227154 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-31 15:51:49.280727 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:51:49.280804 | orchestrator | 2025-05-31 15:51:49.280833 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-31 15:51:49.331965 | orchestrator | ok: [testbed-manager] 2025-05-31 15:51:49.332041 | orchestrator | 2025-05-31 15:51:49.332056 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-31 15:51:50.131887 | orchestrator | changed: [testbed-manager] 2025-05-31 15:51:50.131995 | orchestrator | 2025-05-31 15:51:50.132012 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-31 15:52:29.770120 | orchestrator | changed: [testbed-manager] 2025-05-31 15:52:29.770251 | orchestrator | 2025-05-31 15:52:29.770277 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-31 15:52:30.422341 | orchestrator | ok: [testbed-manager] 2025-05-31 15:52:30.422446 | orchestrator | 2025-05-31 15:52:30.422463 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-05-31 15:52:30.479430 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:52:30.479537 | orchestrator | 2025-05-31 15:52:30.479554 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-31 15:52:33.229126 | orchestrator | changed: [testbed-manager] 2025-05-31 15:52:33.229232 | orchestrator | 2025-05-31 15:52:33.229250 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-31 15:52:33.284064 | orchestrator | ok: 
[testbed-manager] 2025-05-31 15:52:33.284159 | orchestrator | 2025-05-31 15:52:33.284172 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-31 15:52:33.284183 | orchestrator | 2025-05-31 15:52:33.284192 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-31 15:52:33.342816 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:52:33.342904 | orchestrator | 2025-05-31 15:52:33.342918 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-31 15:53:33.399479 | orchestrator | Pausing for 60 seconds 2025-05-31 15:53:33.399623 | orchestrator | changed: [testbed-manager] 2025-05-31 15:53:33.399641 | orchestrator | 2025-05-31 15:53:33.399655 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-31 15:53:38.325136 | orchestrator | changed: [testbed-manager] 2025-05-31 15:53:38.325263 | orchestrator | 2025-05-31 15:53:38.325289 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-31 15:54:19.921310 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-31 15:54:19.921426 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-31 15:54:19.921442 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:19.921456 | orchestrator | 2025-05-31 15:54:19.921468 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-31 15:54:25.544328 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:25.544453 | orchestrator | 2025-05-31 15:54:25.544473 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-31 15:54:25.638338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-31 15:54:25.638433 | orchestrator | 2025-05-31 15:54:25.638445 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-31 15:54:25.638456 | orchestrator | 2025-05-31 15:54:25.638466 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-31 15:54:25.694797 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:54:25.694892 | orchestrator | 2025-05-31 15:54:25.694906 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:54:25.694920 | orchestrator | testbed-manager : ok=111 changed=59 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 2025-05-31 15:54:25.694932 | orchestrator | 2025-05-31 15:54:25.809243 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 15:54:25.809344 | orchestrator | + deactivate 2025-05-31 15:54:25.809392 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-31 15:54:25.809407 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 15:54:25.809418 | orchestrator | + export PATH 2025-05-31 15:54:25.809429 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-31 15:54:25.809442 | orchestrator | + '[' -n '' ']' 2025-05-31 15:54:25.809453 | orchestrator | + hash -r 2025-05-31 15:54:25.809464 | 
orchestrator | + '[' -n '' ']' 2025-05-31 15:54:25.809476 | orchestrator | + unset VIRTUAL_ENV 2025-05-31 15:54:25.809486 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-31 15:54:25.809552 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-05-31 15:54:25.809565 | orchestrator | + unset -f deactivate 2025-05-31 15:54:25.809577 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-31 15:54:25.816384 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-31 15:54:25.816439 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-31 15:54:25.816451 | orchestrator | + local max_attempts=60 2025-05-31 15:54:25.816463 | orchestrator | + local name=ceph-ansible 2025-05-31 15:54:25.816475 | orchestrator | + local attempt_num=1 2025-05-31 15:54:25.817517 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-31 15:54:25.849575 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 15:54:25.849649 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-31 15:54:25.849663 | orchestrator | + local max_attempts=60 2025-05-31 15:54:25.849677 | orchestrator | + local name=kolla-ansible 2025-05-31 15:54:25.849688 | orchestrator | + local attempt_num=1 2025-05-31 15:54:25.850635 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-31 15:54:25.885640 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 15:54:25.885681 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-31 15:54:25.885694 | orchestrator | + local max_attempts=60 2025-05-31 15:54:25.885705 | orchestrator | + local name=osism-ansible 2025-05-31 15:54:25.885716 | orchestrator | + local attempt_num=1 2025-05-31 15:54:25.886515 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-31 15:54:25.917066 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 15:54:25.917109 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-31 15:54:25.917121 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-31 15:54:26.558933 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-31 15:54:26.788242 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-31 15:54:26.788339 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788374 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788386 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-31 15:54:26.788403 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-31 15:54:26.788414 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788424 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788433 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788465 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-31 15:54:26.788475 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788485 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-31 15:54:26.788528 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788540 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788550 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-31 15:54:26.788559 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788569 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788579 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.788588 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-31 15:54:26.794445 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-31 15:54:26.927617 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-31 15:54:26.927714 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-31 15:54:26.927729 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-31 15:54:26.927742 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-31 15:54:26.927755 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-31 15:54:26.934547 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-31 15:54:26.991952 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-31 15:54:26.992041 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-31 15:54:26.996202 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-31 15:54:28.517370 | orchestrator | 2025-05-31 15:54:28 | INFO  | Task 0caf1806-67ec-4827-a8f2-e2f820b20918 (resolvconf) was prepared for execution. 
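
Between the plays, the bootstrap script itself (the `+` trace above) gates on the ceph-ansible, kolla-ansible and osism-ansible runner containers: wait_for_container_healthy 60 <name> probes each one with docker inspect -f '{{.State.Health.Status}}' before the docker compose ps overviews and the following osism apply calls run. Only the variable setup and the probe are visible in the trace, so the loop body below is a reconstruction with an assumed retry delay:

    # Reconstruction of the traced helper; the actual sleep interval and
    # failure handling are not shown in the log.
    wait_for_container_healthy() {
        local max_attempts="$1" name="$2" attempt_num=1
        until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
            if [ "$attempt_num" -ge "$max_attempts" ]; then
                echo "container ${name} did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5   # assumed interval
        done
    }

It is the per-container counterpart of the project-wide check sketched earlier: the playbook waits for the NetBox stack, while the shell script waits for the runner containers it is about to use.
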
2025-05-31 15:54:28.517475 | orchestrator | 2025-05-31 15:54:28 | INFO  | It takes a moment until task 0caf1806-67ec-4827-a8f2-e2f820b20918 (resolvconf) has been started and output is visible here. 2025-05-31 15:54:31.452060 | orchestrator | 2025-05-31 15:54:31.452448 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-31 15:54:31.453403 | orchestrator | 2025-05-31 15:54:31.453649 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:54:31.454842 | orchestrator | Saturday 31 May 2025 15:54:31 +0000 (0:00:00.083) 0:00:00.083 ********** 2025-05-31 15:54:35.371026 | orchestrator | ok: [testbed-manager] 2025-05-31 15:54:35.371378 | orchestrator | 2025-05-31 15:54:35.372455 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-31 15:54:35.373052 | orchestrator | Saturday 31 May 2025 15:54:35 +0000 (0:00:03.921) 0:00:04.005 ********** 2025-05-31 15:54:35.418736 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:54:35.419013 | orchestrator | 2025-05-31 15:54:35.419582 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-31 15:54:35.420888 | orchestrator | Saturday 31 May 2025 15:54:35 +0000 (0:00:00.048) 0:00:04.053 ********** 2025-05-31 15:54:35.489072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-31 15:54:35.489924 | orchestrator | 2025-05-31 15:54:35.490355 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-31 15:54:35.491240 | orchestrator | Saturday 31 May 2025 15:54:35 +0000 (0:00:00.070) 0:00:04.124 ********** 2025-05-31 15:54:35.552832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-31 15:54:35.553407 | orchestrator | 2025-05-31 15:54:35.554124 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-31 15:54:35.554228 | orchestrator | Saturday 31 May 2025 15:54:35 +0000 (0:00:00.063) 0:00:04.188 ********** 2025-05-31 15:54:36.566471 | orchestrator | ok: [testbed-manager] 2025-05-31 15:54:36.566772 | orchestrator | 2025-05-31 15:54:36.567521 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-31 15:54:36.567988 | orchestrator | Saturday 31 May 2025 15:54:36 +0000 (0:00:01.011) 0:00:05.199 ********** 2025-05-31 15:54:36.619220 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:54:36.619986 | orchestrator | 2025-05-31 15:54:36.620482 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-31 15:54:36.621349 | orchestrator | Saturday 31 May 2025 15:54:36 +0000 (0:00:00.054) 0:00:05.254 ********** 2025-05-31 15:54:37.134445 | orchestrator | ok: [testbed-manager] 2025-05-31 15:54:37.135382 | orchestrator | 2025-05-31 15:54:37.135636 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-31 15:54:37.136238 | orchestrator | Saturday 31 May 2025 15:54:37 +0000 (0:00:00.514) 0:00:05.769 ********** 2025-05-31 15:54:37.211101 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:54:37.211549 | orchestrator | 2025-05-31 15:54:37.212410 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-31 15:54:37.213154 | orchestrator | Saturday 31 May 2025 15:54:37 +0000 (0:00:00.077) 0:00:05.846 ********** 2025-05-31 15:54:37.748308 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:37.748582 | orchestrator | 2025-05-31 15:54:37.749059 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-31 15:54:37.749568 | orchestrator | Saturday 31 May 2025 15:54:37 +0000 (0:00:00.536) 0:00:06.382 ********** 2025-05-31 15:54:38.757911 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:38.758672 | orchestrator | 2025-05-31 15:54:38.759479 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-31 15:54:38.760398 | orchestrator | Saturday 31 May 2025 15:54:38 +0000 (0:00:01.008) 0:00:07.391 ********** 2025-05-31 15:54:39.680478 | orchestrator | ok: [testbed-manager] 2025-05-31 15:54:39.680959 | orchestrator | 2025-05-31 15:54:39.682132 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-31 15:54:39.682162 | orchestrator | Saturday 31 May 2025 15:54:39 +0000 (0:00:00.922) 0:00:08.314 ********** 2025-05-31 15:54:39.757010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-31 15:54:39.757247 | orchestrator | 2025-05-31 15:54:39.757772 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-31 15:54:39.758229 | orchestrator | Saturday 31 May 2025 15:54:39 +0000 (0:00:00.077) 0:00:08.391 ********** 2025-05-31 15:54:40.866188 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:40.867127 | orchestrator | 2025-05-31 15:54:40.867471 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:54:40.868023 | orchestrator | 2025-05-31 15:54:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 15:54:40.868135 | orchestrator | 2025-05-31 15:54:40 | INFO  | Please wait and do not abort execution. 
2025-05-31 15:54:40.869283 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 15:54:40.869896 | orchestrator | 2025-05-31 15:54:40.871050 | orchestrator | Saturday 31 May 2025 15:54:40 +0000 (0:00:01.107) 0:00:09.499 ********** 2025-05-31 15:54:40.871431 | orchestrator | =============================================================================== 2025-05-31 15:54:40.872358 | orchestrator | Gathering Facts --------------------------------------------------------- 3.92s 2025-05-31 15:54:40.872766 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s 2025-05-31 15:54:40.873141 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.01s 2025-05-31 15:54:40.873629 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.01s 2025-05-31 15:54:40.874071 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s 2025-05-31 15:54:40.874558 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2025-05-31 15:54:40.874975 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2025-05-31 15:54:40.875575 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-05-31 15:54:40.875900 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-31 15:54:40.876339 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-05-31 15:54:40.876667 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-05-31 15:54:40.877078 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-05-31 15:54:40.877399 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-05-31 15:54:41.211535 | orchestrator | + osism apply sshconfig 2025-05-31 15:54:42.577764 | orchestrator | 2025-05-31 15:54:42 | INFO  | Task 5b25b099-aaef-4e99-8d16-6237d9e59bb8 (sshconfig) was prepared for execution. 2025-05-31 15:54:42.577872 | orchestrator | 2025-05-31 15:54:42 | INFO  | It takes a moment until task 5b25b099-aaef-4e99-8d16-6237d9e59bb8 (sshconfig) has been started and output is visible here. 
2025-05-31 15:54:45.488178 | orchestrator | 2025-05-31 15:54:45.488583 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-31 15:54:45.488962 | orchestrator | 2025-05-31 15:54:45.489669 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-31 15:54:45.490273 | orchestrator | Saturday 31 May 2025 15:54:45 +0000 (0:00:00.109) 0:00:00.109 ********** 2025-05-31 15:54:46.012601 | orchestrator | ok: [testbed-manager] 2025-05-31 15:54:46.012706 | orchestrator | 2025-05-31 15:54:46.012723 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-31 15:54:46.013273 | orchestrator | Saturday 31 May 2025 15:54:46 +0000 (0:00:00.525) 0:00:00.634 ********** 2025-05-31 15:54:46.500441 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:46.500610 | orchestrator | 2025-05-31 15:54:46.501021 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-31 15:54:46.501450 | orchestrator | Saturday 31 May 2025 15:54:46 +0000 (0:00:00.487) 0:00:01.122 ********** 2025-05-31 15:54:51.863284 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-31 15:54:51.863559 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-31 15:54:51.864156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-31 15:54:51.864627 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-31 15:54:51.865216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-31 15:54:51.865685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-31 15:54:51.868222 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-31 15:54:51.868248 | orchestrator | 2025-05-31 15:54:51.869318 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-31 15:54:51.870009 | orchestrator | Saturday 31 May 2025 15:54:51 +0000 (0:00:05.362) 0:00:06.485 ********** 2025-05-31 15:54:51.937589 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:54:51.937769 | orchestrator | 2025-05-31 15:54:51.938551 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-31 15:54:51.938883 | orchestrator | Saturday 31 May 2025 15:54:51 +0000 (0:00:00.076) 0:00:06.561 ********** 2025-05-31 15:54:52.517794 | orchestrator | changed: [testbed-manager] 2025-05-31 15:54:52.518310 | orchestrator | 2025-05-31 15:54:52.518995 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:54:52.519293 | orchestrator | 2025-05-31 15:54:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 15:54:52.519593 | orchestrator | 2025-05-31 15:54:52 | INFO  | Please wait and do not abort execution. 
2025-05-31 15:54:52.520529 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 15:54:52.521025 | orchestrator | 2025-05-31 15:54:52.521566 | orchestrator | Saturday 31 May 2025 15:54:52 +0000 (0:00:00.579) 0:00:07.141 ********** 2025-05-31 15:54:52.522127 | orchestrator | =============================================================================== 2025-05-31 15:54:52.522463 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.36s 2025-05-31 15:54:52.522901 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-05-31 15:54:52.523224 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s 2025-05-31 15:54:52.523562 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-05-31 15:54:52.523894 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-05-31 15:54:52.865055 | orchestrator | + osism apply known-hosts 2025-05-31 15:54:54.227907 | orchestrator | 2025-05-31 15:54:54 | INFO  | Task b4981702-d6ca-48bf-bc94-38b1723b8c41 (known-hosts) was prepared for execution. 2025-05-31 15:54:54.228008 | orchestrator | 2025-05-31 15:54:54 | INFO  | It takes a moment until task b4981702-d6ca-48bf-bc94-38b1723b8c41 (known-hosts) has been started and output is visible here. 2025-05-31 15:54:57.087409 | orchestrator | 2025-05-31 15:54:57.087875 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-31 15:54:57.088464 | orchestrator | 2025-05-31 15:54:57.089469 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-31 15:54:57.089805 | orchestrator | Saturday 31 May 2025 15:54:57 +0000 (0:00:00.086) 0:00:00.086 ********** 2025-05-31 15:55:02.625266 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-31 15:55:02.625374 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-31 15:55:02.626348 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-31 15:55:02.626684 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-31 15:55:02.627603 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-31 15:55:02.629178 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-31 15:55:02.629542 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-31 15:55:02.629891 | orchestrator | 2025-05-31 15:55:02.630441 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-31 15:55:02.630830 | orchestrator | Saturday 31 May 2025 15:55:02 +0000 (0:00:05.537) 0:00:05.623 ********** 2025-05-31 15:55:02.781383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-31 15:55:02.781588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-31 15:55:02.783336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-31 
15:55:02.783384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-31 15:55:02.784457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-31 15:55:02.785324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-31 15:55:02.785536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-31 15:55:02.786562 | orchestrator | 2025-05-31 15:55:02.787056 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:02.787707 | orchestrator | Saturday 31 May 2025 15:55:02 +0000 (0:00:00.157) 0:00:05.781 ********** 2025-05-31 15:55:03.933000 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJLhZN2gxYSggD/4pNS61iDnFI3LgLnJcNeKWfBg4qnu) 2025-05-31 15:55:03.933136 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCCjSUUA80kXmceyL0YcNvUM57tzroANSr57ZRGpsUwWKewpM8Z9fksPsIqwZMhuBepL5E/T0NUIMP42nlsm7B+PwM96YQt9DxO3z2tFDNsD5sUOUi/uaA71KawAiFI8MTa6n6+OMVpPdx8gBjFWa6nkoHjKzZn68H214qFXKJPLfNUH8mUjDHSJ18cRkkkHum+KwqZauIdBvElzaFHRvzWM+GAo+ErPguMMnQeP2GkHe35hhdkQAEtmf2MiRh5rY8WnUz1rLm3nHdn/MmvJ++93WuJyjD8O5M4VjCRAJPgh11wd9I0tNdoVoMFZ/dNSxJSRZT1AsdEjdvTopUDKjsck7HG7h67zOnSvUT9+8c0c8CttwVJl64e0AQ1PvEF7szEoj7/TIygAtLOkrlRAiLsAI1SZYaGPsB5AUvIr+wQ+a+/MJ0Th6leWuD5iu3AWUlFWp9/H75CXJAOUT1Vpa1yKK86vbCkjAyJkRLgG3DomlYX95DR4zUFbKKg6YvKuV0=) 2025-05-31 15:55:03.933172 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMuqiOdyjnV/YyjTAvP+8xszyztu8Im8wPhQ673E65OWWTtkvyot5gIgtlLR4KFQcfNNxlE4BdHglrVVdFpa1B8=) 2025-05-31 15:55:03.933575 | orchestrator | 2025-05-31 15:55:03.934173 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:03.934735 | orchestrator | Saturday 31 May 2025 15:55:03 +0000 (0:00:01.149) 0:00:06.930 ********** 2025-05-31 15:55:04.958956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChNlVbD7MyhojQ30Vbd/oBOxnCR/k8BgMs7zIogxzDr2ZUi0wgs/O/5cgJ1oH3ZMBAiFTX05Ys1yw9GmBTxE4NdguON/xXwojaV1ZuSYOqSupG5DNXsaV4wr44To3c423evBPRx9S6nkrsmT4gXo+HcI8fqI0yXjKARBifLw5cENFS94Xdc7HbrBh8XtGWZvmHhwx7PC3/LBEpfpu3l94sQ7KtlrVgUplUN8Zjush+KQoMk0NwHkTIh+0cHF94JjJmm4MVPMcdagQl3G8V3sITBN+bq2B5BBPZjMAQFI5GnuN8WCYcymIXQsaQdund3UDTQJaEcSBlr9i2Lj7XInZOb7ME1D+kOWDr6+D4YPuBXantyGIw8RoASrvgZwCF39rYFr1Hp3TXkK/ONyA+NBW6J48EqXHR/Wm1UlwmuDjVtWmboJ1Ly5CT3yLWNZAnoMaxMaDE1esMPTqyK36ZE1d5lHHehq2Ki3SV329peffA2uIN79cZ6z8W7BILqD2ee8c=) 2025-05-31 15:55:04.959226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHfeb63kX34h0z9Idyj/wM0uHFRj2HD9504BEZn0tSEoAsVZrySj8g/KdUdCoWoUf8W21nqCilv0VKh78LUAm5Q=) 2025-05-31 15:55:04.960762 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6c1yEtVvTpOMyjBXxZhWxptbrxaVBbDbe7aB5WdD6X) 2025-05-31 15:55:04.961170 | orchestrator | 2025-05-31 15:55:04.961753 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:04.962230 | orchestrator | Saturday 31 May 2025 15:55:04 +0000 (0:00:01.026) 0:00:07.957 ********** 2025-05-31 15:55:05.926936 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCie8k9aY38my08v751GJGa+DkH8AjQ3JntVsU3lhIPxdCUtP7rXTeOBu7Lvm/mDu65PX26wxmZ9fL4azYP7sBsgIV3iyGhnEAvFjH0yddIH5l0Q6nM1TTALDlDeS/Vq66FgYUbw0MWbtUoxOhAhNWg8FDyAnF0epwudSnGf5/AlYc+GL1bK/dVkFfcYJRoE7T82gPbbI4L/f7OoLUmSdc/q1STan0fJq5tlL5mR6Q63pJPirx4eqdOd8/zXT9m8NbUvNABSVNf4XER8wKE9ooUsKP0dkiiDA2bZ24AoXLqJq/v0u2VNkGlYQnBvjS8XdgzsEVaSzC5qbtUj8V8W9mxWfoNJc3bTNKnbNBSURNj3B3P2O/kL4TpGPguBU7SnS8ptVoaO2wXb8u5/ypTOATIvkkL6wnyQdrMZcJA8zODyMyN8kIdDsMBgor030+Pv3LwWPdB6XAKfzCelzAkyGHgLwlv8/zK4CCXsWpdpXA8dKV0AA+/Uw8HTmz5S/yOhNc=) 2025-05-31 15:55:05.929054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgn+236bF8BR2Sj6WcZMHveNJIa7/UtE9D2NekUl+MhCExzaP509TX4S5+9D+rCHEnJwYuaFQpvF9S8WgrqiO4=) 2025-05-31 15:55:05.929085 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPf0pmzWZmCJkNqFy8VTkZ03gwocA/IQyanl5qFzZdT) 2025-05-31 15:55:05.930194 | orchestrator | 2025-05-31 15:55:05.930893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:05.932572 | orchestrator | Saturday 31 May 2025 15:55:05 +0000 (0:00:00.967) 0:00:08.924 ********** 2025-05-31 15:55:06.912976 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgs/vJkYjNJWn7KG1Zq21C8cb9Apli4p/ioMOlx4UcgXDGJcjhKtOCXvW31xRisvSeedgz50F7pHuFFKfmBkY+t0KOjWzd+Nvn343oCqyld4qJxcbavRrv9LdNpxr6OY5L1boJcHezZYnGFT2pgKSWL4cP0YW4FEdezSk6iMUHgYCpaxBmYPBE8vjYfhntKf1n/xUaKKUMKB8rmf3Mkrx+AYK3EijW3LKJf8j+2qHD1lOf02t19jtvB1fidlNeALQplSIv787fPkHsWclagA/ydGke5gvXG+coNjSiIS23FkAySjOYQOXouKioTSi2RFq99qQsb2vNtg7v5ZZgXz41cra6YAtmPwsEhDZZaOQdN3bg6NdBK5xylSqcuBAHAJ8eV9ju+Sxtbs+jwdAfA22JPGT9zE+BvG8MLFvQ7uY1R6qcQDeq6uUuNHwTNAJSrNXM/0V2GjhSxeVVN4U4PPsdP+v/1xXVFdXdt8tavaX79xzFLNVWPQmkBcr0WR8B5pc=) 2025-05-31 15:55:06.913150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIzBuMN8JhZRVgIQ4OXswMYbwwGEaMxzhx+n6qSYd7JXZu2pc4iw0WDp5v2fht0Iw0ABgxYSc3eDnrEXtv08E4A=) 2025-05-31 15:55:06.914091 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICeOxWhyGsw+t/juJXkBUpE+4mvpC80UdchJ4Rb284dD) 2025-05-31 15:55:06.914955 | orchestrator | 2025-05-31 15:55:06.915651 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:06.916629 | orchestrator | Saturday 31 May 2025 15:55:06 +0000 (0:00:00.985) 0:00:09.910 ********** 2025-05-31 15:55:07.956379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1miFBSzr8rBJQt6xJfvVnO5OmR0HKbJJTaDnkxcvaMV0SnlpQO67emaanqN96Zif60kHIxjpdz3Ayi7PGU7JY=) 2025-05-31 15:55:07.956569 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOcOXxrcmlOAlIS1itylw0hHfusrulON2v+nBG0CwOtO) 2025-05-31 
15:55:07.956593 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAWfwvYiLPlby5lxbsosEUsUBsJglXDDDjCZQPZv1V+FTDKn2q0/vIyqmS96KE+8YscVDAapEG923GoD6+wx3RKoz1xKJdvwL/gFGTIjay00e4jkiN2t+DSIYcS4U+XPxd6US205Gw6CNT4O8s3qry8qonWgTYD0T3TWXZolw2b7fScN1gKqHGMs+C+GSxrfN6AitoVnW718fCSVM/nXYpw67AlTiaeIe8h0jQ0bi3XBje02lcFF3Hu9x9yC9hxGy9o+9nCK0kHFl3evez5tRxNI7jzAIzIF9VGu+gQ2NcCT+ryzhULvPQW/A7MqYpQUvOlp+nNJuAK4f1ydj63/5zALJqIWWlpqIpDSRd+YaJ+Io9oHX7/8tOAFQF9xwYsAJLAXvGA1YvI5fY27DskhGksJd3fI3uiZ7dyS45tJl89LFle5AD4tIVfbuLOl2Ry7sSukKOaZBgxEyeLVY9NUItItwoisnFFyBisDrM6nvnGnnGQ8dPi7aIR0a3lypjMTE=) 2025-05-31 15:55:07.957162 | orchestrator | 2025-05-31 15:55:07.958980 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:07.959008 | orchestrator | Saturday 31 May 2025 15:55:07 +0000 (0:00:01.040) 0:00:10.951 ********** 2025-05-31 15:55:08.935461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG3zs/JUomp5/IUHisQBpwAjyMax+uYldmvqZsiyStw3oPTAkghP5N7FmGqKV991NBZgW6JsusLuOl9RuEJmNDFN4WYV0B4KjJAG6B6QkFIAkLlSS91D7yHzatfqBSorY30wZvldgep/GXSGUGWaPdDx7yxiGol+6rgVdFGNbVR17Sd76gJeFuW+qHDts33vbKVzElPLNonImsLfk+c0cmR70LvF3h+ZzAE3j06DsdXl4cVPFut1ShIF7Cr0fvNlgoL/n6pc8W3aHdjooJeGZC2faR/LaSWjfvztzYo0dqXq6BlrMrCnCSM2MHL/OvWXL2W51U+slC00M509YQ3BOwE+bITDQZxk2tv2vR8ovZTGUQm5soUCcbhyiXiHHhzoing7RHWAOzMxKWbU/StK6ZK5LmdLk76+cnfdCI4y5Yf4ST4X73gzSR/M+d5v4eW9iJN09um0u8/IagSd5gFmCSZbnCgPgTfWnyKjp/tLHdkBBJ33gwtPNUYpyafQfCmJU=) 2025-05-31 15:55:08.935667 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA2oI1ZA2AuBImrxkj3J4VvL06mG16O8hrINFzooxa61nOUxygvbB2cebqViOZ9vDHhhpwcpgYE6ZY5pNLGu5ZU=) 2025-05-31 15:55:08.935688 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKoQQwSFY0XyMNzXAm3qckRHrGTQvyOhLyAG7UCuNDB2) 2025-05-31 15:55:08.935771 | orchestrator | 2025-05-31 15:55:08.936967 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:08.937995 | orchestrator | Saturday 31 May 2025 15:55:08 +0000 (0:00:00.983) 0:00:11.934 ********** 2025-05-31 15:55:09.957003 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7E0Z6wUpKBCrVUBXwHTpvGGSpxdQn2Zm/0dEH/zE9B2QJHXbjTNewsLJFnTOGQMhFIYo1dJvyLgfgdAuAEDvM=) 2025-05-31 15:55:09.957586 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGxIc0TYy+vZ9ousIsaE3qAJTTKhYrXWAISn1O4unsXX) 2025-05-31 15:55:09.958208 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKRGyo85UhnVxRpnqob4m6FuDpKlWh4Ub/GU36iMJ5+7rCxGl7fiAzAWwhPNebY+zv+Ul9xHnX26G+tyYHDxgPY5oYuHfjgYrnGHaLWb9baibMqi01K8/61B2+APHV41rkZ7uHgaWCWAGV6nhstxEC9DCIMF9dKC9mlyn0zkLiM0xEpEam3eT7gRmXB3wqYOt2sJva4hgVVREHjkTvoqCC7RGYydzkapogkftCcjmCqCN/LY0XRkZEv+Z7WuenEzwaC42LlnUWo1YDHxh72vOICO0tr6ELV3/bqUHI1r77bVEVX/wsyzs6MjGCxTosCsdDQJ6NdWHs3nSkVLdAps+VnY3VTIlvSweqkXXKFfV4z2Wtzueen9/C8T1DmlWNAJppFO+v1ecKYM1qB5gSTe45Ds6W9avNW5UPVS90G+NBXR40y8zVciSHNmBfukEWFzDfWdU8uJi9eCHvFwhWRYNrlwVqy4NLolzqsG4wYzgCUTOKpbWzaqPvkEPRalVgRDk=) 2025-05-31 15:55:09.958973 | orchestrator | 2025-05-31 15:55:09.959529 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 
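
The known_hosts role collects every node's keys twice: the pass that just finished scanned the hosts by inventory hostname, and the task announced above repeats the scan keyed on each node's ansible_host address (the 192.168.16.x entries that follow), so connections by name and by IP both verify cleanly in later plays. Stripped of the Ansible wrapping, one pass is roughly the sketch below; scan_host is a hypothetical helper, the key types are taken from the entries written above, and how the role deduplicates and assembles the file is not shown here:

    # Hypothetical single pass: scan a host's rsa/ecdsa/ed25519 keys and append them.
    scan_host() {
        ssh-keyscan -t rsa,ecdsa,ed25519 "$1" 2>/dev/null >> "$HOME/.ssh/known_hosts"
    }

    for h in testbed-manager testbed-node-{0..5}; do   # first pass: hostnames
        scan_host "$h"
    done
    # ...and again with each node's ansible_host IP (192.168.16.5, 192.168.16.13, ...)
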
2025-05-31 15:55:09.960027 | orchestrator | Saturday 31 May 2025 15:55:09 +0000 (0:00:01.021) 0:00:12.956 ********** 2025-05-31 15:55:15.127757 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-31 15:55:15.128786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-31 15:55:15.129025 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-31 15:55:15.129839 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-31 15:55:15.131814 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-31 15:55:15.131939 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-31 15:55:15.132446 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-31 15:55:15.133066 | orchestrator | 2025-05-31 15:55:15.133609 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-31 15:55:15.135068 | orchestrator | Saturday 31 May 2025 15:55:15 +0000 (0:00:05.170) 0:00:18.127 ********** 2025-05-31 15:55:15.292837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-31 15:55:15.293024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-31 15:55:15.293507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-31 15:55:15.295387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-31 15:55:15.295635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-31 15:55:15.295997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-31 15:55:15.296503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-31 15:55:15.296826 | orchestrator | 2025-05-31 15:55:15.297258 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:15.297619 | orchestrator | Saturday 31 May 2025 15:55:15 +0000 (0:00:00.166) 0:00:18.293 ********** 2025-05-31 15:55:16.333640 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJLhZN2gxYSggD/4pNS61iDnFI3LgLnJcNeKWfBg4qnu) 2025-05-31 15:55:16.334308 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCCjSUUA80kXmceyL0YcNvUM57tzroANSr57ZRGpsUwWKewpM8Z9fksPsIqwZMhuBepL5E/T0NUIMP42nlsm7B+PwM96YQt9DxO3z2tFDNsD5sUOUi/uaA71KawAiFI8MTa6n6+OMVpPdx8gBjFWa6nkoHjKzZn68H214qFXKJPLfNUH8mUjDHSJ18cRkkkHum+KwqZauIdBvElzaFHRvzWM+GAo+ErPguMMnQeP2GkHe35hhdkQAEtmf2MiRh5rY8WnUz1rLm3nHdn/MmvJ++93WuJyjD8O5M4VjCRAJPgh11wd9I0tNdoVoMFZ/dNSxJSRZT1AsdEjdvTopUDKjsck7HG7h67zOnSvUT9+8c0c8CttwVJl64e0AQ1PvEF7szEoj7/TIygAtLOkrlRAiLsAI1SZYaGPsB5AUvIr+wQ+a+/MJ0Th6leWuD5iu3AWUlFWp9/H75CXJAOUT1Vpa1yKK86vbCkjAyJkRLgG3DomlYX95DR4zUFbKKg6YvKuV0=) 2025-05-31 15:55:16.336634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMuqiOdyjnV/YyjTAvP+8xszyztu8Im8wPhQ673E65OWWTtkvyot5gIgtlLR4KFQcfNNxlE4BdHglrVVdFpa1B8=) 2025-05-31 15:55:16.337370 | orchestrator | 2025-05-31 15:55:16.338202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:16.338697 | orchestrator | Saturday 31 May 2025 15:55:16 +0000 (0:00:01.039) 0:00:19.332 ********** 2025-05-31 15:55:17.312684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6c1yEtVvTpOMyjBXxZhWxptbrxaVBbDbe7aB5WdD6X) 2025-05-31 15:55:17.312772 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChNlVbD7MyhojQ30Vbd/oBOxnCR/k8BgMs7zIogxzDr2ZUi0wgs/O/5cgJ1oH3ZMBAiFTX05Ys1yw9GmBTxE4NdguON/xXwojaV1ZuSYOqSupG5DNXsaV4wr44To3c423evBPRx9S6nkrsmT4gXo+HcI8fqI0yXjKARBifLw5cENFS94Xdc7HbrBh8XtGWZvmHhwx7PC3/LBEpfpu3l94sQ7KtlrVgUplUN8Zjush+KQoMk0NwHkTIh+0cHF94JjJmm4MVPMcdagQl3G8V3sITBN+bq2B5BBPZjMAQFI5GnuN8WCYcymIXQsaQdund3UDTQJaEcSBlr9i2Lj7XInZOb7ME1D+kOWDr6+D4YPuBXantyGIw8RoASrvgZwCF39rYFr1Hp3TXkK/ONyA+NBW6J48EqXHR/Wm1UlwmuDjVtWmboJ1Ly5CT3yLWNZAnoMaxMaDE1esMPTqyK36ZE1d5lHHehq2Ki3SV329peffA2uIN79cZ6z8W7BILqD2ee8c=) 2025-05-31 15:55:17.313589 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHfeb63kX34h0z9Idyj/wM0uHFRj2HD9504BEZn0tSEoAsVZrySj8g/KdUdCoWoUf8W21nqCilv0VKh78LUAm5Q=) 2025-05-31 15:55:17.314052 | orchestrator | 2025-05-31 15:55:17.314318 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:17.317059 | orchestrator | Saturday 31 May 2025 15:55:17 +0000 (0:00:00.979) 0:00:20.312 ********** 2025-05-31 15:55:18.335675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCie8k9aY38my08v751GJGa+DkH8AjQ3JntVsU3lhIPxdCUtP7rXTeOBu7Lvm/mDu65PX26wxmZ9fL4azYP7sBsgIV3iyGhnEAvFjH0yddIH5l0Q6nM1TTALDlDeS/Vq66FgYUbw0MWbtUoxOhAhNWg8FDyAnF0epwudSnGf5/AlYc+GL1bK/dVkFfcYJRoE7T82gPbbI4L/f7OoLUmSdc/q1STan0fJq5tlL5mR6Q63pJPirx4eqdOd8/zXT9m8NbUvNABSVNf4XER8wKE9ooUsKP0dkiiDA2bZ24AoXLqJq/v0u2VNkGlYQnBvjS8XdgzsEVaSzC5qbtUj8V8W9mxWfoNJc3bTNKnbNBSURNj3B3P2O/kL4TpGPguBU7SnS8ptVoaO2wXb8u5/ypTOATIvkkL6wnyQdrMZcJA8zODyMyN8kIdDsMBgor030+Pv3LwWPdB6XAKfzCelzAkyGHgLwlv8/zK4CCXsWpdpXA8dKV0AA+/Uw8HTmz5S/yOhNc=) 2025-05-31 15:55:18.336103 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgn+236bF8BR2Sj6WcZMHveNJIa7/UtE9D2NekUl+MhCExzaP509TX4S5+9D+rCHEnJwYuaFQpvF9S8WgrqiO4=) 2025-05-31 15:55:18.337165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPf0pmzWZmCJkNqFy8VTkZ03gwocA/IQyanl5qFzZdT) 2025-05-31 
15:55:18.337666 | orchestrator | 2025-05-31 15:55:18.338630 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:18.340966 | orchestrator | Saturday 31 May 2025 15:55:18 +0000 (0:00:01.021) 0:00:21.333 ********** 2025-05-31 15:55:19.326830 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICeOxWhyGsw+t/juJXkBUpE+4mvpC80UdchJ4Rb284dD) 2025-05-31 15:55:19.327799 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgs/vJkYjNJWn7KG1Zq21C8cb9Apli4p/ioMOlx4UcgXDGJcjhKtOCXvW31xRisvSeedgz50F7pHuFFKfmBkY+t0KOjWzd+Nvn343oCqyld4qJxcbavRrv9LdNpxr6OY5L1boJcHezZYnGFT2pgKSWL4cP0YW4FEdezSk6iMUHgYCpaxBmYPBE8vjYfhntKf1n/xUaKKUMKB8rmf3Mkrx+AYK3EijW3LKJf8j+2qHD1lOf02t19jtvB1fidlNeALQplSIv787fPkHsWclagA/ydGke5gvXG+coNjSiIS23FkAySjOYQOXouKioTSi2RFq99qQsb2vNtg7v5ZZgXz41cra6YAtmPwsEhDZZaOQdN3bg6NdBK5xylSqcuBAHAJ8eV9ju+Sxtbs+jwdAfA22JPGT9zE+BvG8MLFvQ7uY1R6qcQDeq6uUuNHwTNAJSrNXM/0V2GjhSxeVVN4U4PPsdP+v/1xXVFdXdt8tavaX79xzFLNVWPQmkBcr0WR8B5pc=) 2025-05-31 15:55:19.329337 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIzBuMN8JhZRVgIQ4OXswMYbwwGEaMxzhx+n6qSYd7JXZu2pc4iw0WDp5v2fht0Iw0ABgxYSc3eDnrEXtv08E4A=) 2025-05-31 15:55:19.329364 | orchestrator | 2025-05-31 15:55:19.330166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:19.330669 | orchestrator | Saturday 31 May 2025 15:55:19 +0000 (0:00:00.992) 0:00:22.326 ********** 2025-05-31 15:55:20.346587 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOcOXxrcmlOAlIS1itylw0hHfusrulON2v+nBG0CwOtO) 2025-05-31 15:55:20.347092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAWfwvYiLPlby5lxbsosEUsUBsJglXDDDjCZQPZv1V+FTDKn2q0/vIyqmS96KE+8YscVDAapEG923GoD6+wx3RKoz1xKJdvwL/gFGTIjay00e4jkiN2t+DSIYcS4U+XPxd6US205Gw6CNT4O8s3qry8qonWgTYD0T3TWXZolw2b7fScN1gKqHGMs+C+GSxrfN6AitoVnW718fCSVM/nXYpw67AlTiaeIe8h0jQ0bi3XBje02lcFF3Hu9x9yC9hxGy9o+9nCK0kHFl3evez5tRxNI7jzAIzIF9VGu+gQ2NcCT+ryzhULvPQW/A7MqYpQUvOlp+nNJuAK4f1ydj63/5zALJqIWWlpqIpDSRd+YaJ+Io9oHX7/8tOAFQF9xwYsAJLAXvGA1YvI5fY27DskhGksJd3fI3uiZ7dyS45tJl89LFle5AD4tIVfbuLOl2Ry7sSukKOaZBgxEyeLVY9NUItItwoisnFFyBisDrM6nvnGnnGQ8dPi7aIR0a3lypjMTE=) 2025-05-31 15:55:20.347808 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1miFBSzr8rBJQt6xJfvVnO5OmR0HKbJJTaDnkxcvaMV0SnlpQO67emaanqN96Zif60kHIxjpdz3Ayi7PGU7JY=) 2025-05-31 15:55:20.348501 | orchestrator | 2025-05-31 15:55:20.348912 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:20.349571 | orchestrator | Saturday 31 May 2025 15:55:20 +0000 (0:00:01.019) 0:00:23.345 ********** 2025-05-31 15:55:21.360506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA2oI1ZA2AuBImrxkj3J4VvL06mG16O8hrINFzooxa61nOUxygvbB2cebqViOZ9vDHhhpwcpgYE6ZY5pNLGu5ZU=) 2025-05-31 15:55:21.360657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDG3zs/JUomp5/IUHisQBpwAjyMax+uYldmvqZsiyStw3oPTAkghP5N7FmGqKV991NBZgW6JsusLuOl9RuEJmNDFN4WYV0B4KjJAG6B6QkFIAkLlSS91D7yHzatfqBSorY30wZvldgep/GXSGUGWaPdDx7yxiGol+6rgVdFGNbVR17Sd76gJeFuW+qHDts33vbKVzElPLNonImsLfk+c0cmR70LvF3h+ZzAE3j06DsdXl4cVPFut1ShIF7Cr0fvNlgoL/n6pc8W3aHdjooJeGZC2faR/LaSWjfvztzYo0dqXq6BlrMrCnCSM2MHL/OvWXL2W51U+slC00M509YQ3BOwE+bITDQZxk2tv2vR8ovZTGUQm5soUCcbhyiXiHHhzoing7RHWAOzMxKWbU/StK6ZK5LmdLk76+cnfdCI4y5Yf4ST4X73gzSR/M+d5v4eW9iJN09um0u8/IagSd5gFmCSZbnCgPgTfWnyKjp/tLHdkBBJ33gwtPNUYpyafQfCmJU=) 2025-05-31 15:55:21.360841 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKoQQwSFY0XyMNzXAm3qckRHrGTQvyOhLyAG7UCuNDB2) 2025-05-31 15:55:21.361134 | orchestrator | 2025-05-31 15:55:21.361435 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 15:55:21.361878 | orchestrator | Saturday 31 May 2025 15:55:21 +0000 (0:00:01.014) 0:00:24.359 ********** 2025-05-31 15:55:22.384525 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKRGyo85UhnVxRpnqob4m6FuDpKlWh4Ub/GU36iMJ5+7rCxGl7fiAzAWwhPNebY+zv+Ul9xHnX26G+tyYHDxgPY5oYuHfjgYrnGHaLWb9baibMqi01K8/61B2+APHV41rkZ7uHgaWCWAGV6nhstxEC9DCIMF9dKC9mlyn0zkLiM0xEpEam3eT7gRmXB3wqYOt2sJva4hgVVREHjkTvoqCC7RGYydzkapogkftCcjmCqCN/LY0XRkZEv+Z7WuenEzwaC42LlnUWo1YDHxh72vOICO0tr6ELV3/bqUHI1r77bVEVX/wsyzs6MjGCxTosCsdDQJ6NdWHs3nSkVLdAps+VnY3VTIlvSweqkXXKFfV4z2Wtzueen9/C8T1DmlWNAJppFO+v1ecKYM1qB5gSTe45Ds6W9avNW5UPVS90G+NBXR40y8zVciSHNmBfukEWFzDfWdU8uJi9eCHvFwhWRYNrlwVqy4NLolzqsG4wYzgCUTOKpbWzaqPvkEPRalVgRDk=) 2025-05-31 15:55:22.385524 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7E0Z6wUpKBCrVUBXwHTpvGGSpxdQn2Zm/0dEH/zE9B2QJHXbjTNewsLJFnTOGQMhFIYo1dJvyLgfgdAuAEDvM=) 2025-05-31 15:55:22.389978 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGxIc0TYy+vZ9ousIsaE3qAJTTKhYrXWAISn1O4unsXX) 2025-05-31 15:55:22.390569 | orchestrator | 2025-05-31 15:55:22.391246 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-31 15:55:22.391942 | orchestrator | Saturday 31 May 2025 15:55:22 +0000 (0:00:01.023) 0:00:25.383 ********** 2025-05-31 15:55:22.560712 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-31 15:55:22.560833 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-31 15:55:22.560858 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-31 15:55:22.560869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-31 15:55:22.560884 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-31 15:55:22.560902 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-31 15:55:22.561384 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-31 15:55:22.561850 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:55:22.562923 | orchestrator | 2025-05-31 15:55:22.562959 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-31 15:55:22.563327 | orchestrator | Saturday 31 May 2025 15:55:22 +0000 (0:00:00.174) 0:00:25.558 ********** 2025-05-31 15:55:22.601827 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:55:22.601903 | orchestrator | 2025-05-31 
15:55:22.601918 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-31 15:55:22.601960 | orchestrator | Saturday 31 May 2025 15:55:22 +0000 (0:00:00.043) 0:00:25.601 ********** 2025-05-31 15:55:22.661889 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:55:22.662752 | orchestrator | 2025-05-31 15:55:22.662807 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-31 15:55:22.662989 | orchestrator | Saturday 31 May 2025 15:55:22 +0000 (0:00:00.060) 0:00:25.661 ********** 2025-05-31 15:55:23.290537 | orchestrator | changed: [testbed-manager] 2025-05-31 15:55:23.290676 | orchestrator | 2025-05-31 15:55:23.290692 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:55:23.290705 | orchestrator | 2025-05-31 15:55:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 15:55:23.290719 | orchestrator | 2025-05-31 15:55:23 | INFO  | Please wait and do not abort execution. 2025-05-31 15:55:23.290791 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 15:55:23.291588 | orchestrator | 2025-05-31 15:55:23.292345 | orchestrator | Saturday 31 May 2025 15:55:23 +0000 (0:00:00.626) 0:00:26.288 ********** 2025-05-31 15:55:23.294714 | orchestrator | =============================================================================== 2025-05-31 15:55:23.295626 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.54s 2025-05-31 15:55:23.296307 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.17s 2025-05-31 15:55:23.296732 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-05-31 15:55:23.298606 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-31 15:55:23.298940 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-31 15:55:23.299942 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-31 15:55:23.300692 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-31 15:55:23.301067 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-31 15:55:23.301557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-31 15:55:23.302131 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-31 15:55:23.302711 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-31 15:55:23.303157 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-05-31 15:55:23.303634 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-05-31 15:55:23.304712 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-05-31 15:55:23.305362 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-05-31 15:55:23.306142 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-05-31 15:55:23.306452 | orchestrator 
| osism.commons.known_hosts : Set file permissions ------------------------ 0.63s 2025-05-31 15:55:23.307248 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-05-31 15:55:23.307675 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-31 15:55:23.308315 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-05-31 15:55:23.667854 | orchestrator | + osism apply squid 2025-05-31 15:55:25.010731 | orchestrator | 2025-05-31 15:55:25 | INFO  | Task f26811f8-f275-44fe-936e-a36f17747405 (squid) was prepared for execution. 2025-05-31 15:55:25.010839 | orchestrator | 2025-05-31 15:55:25 | INFO  | It takes a moment until task f26811f8-f275-44fe-936e-a36f17747405 (squid) has been started and output is visible here. 2025-05-31 15:55:27.969250 | orchestrator | 2025-05-31 15:55:27.970135 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-31 15:55:27.970658 | orchestrator | 2025-05-31 15:55:27.972416 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-31 15:55:27.973370 | orchestrator | Saturday 31 May 2025 15:55:27 +0000 (0:00:00.102) 0:00:00.102 ********** 2025-05-31 15:55:28.055807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-31 15:55:28.057061 | orchestrator | 2025-05-31 15:55:28.057153 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-31 15:55:28.058058 | orchestrator | Saturday 31 May 2025 15:55:28 +0000 (0:00:00.089) 0:00:00.192 ********** 2025-05-31 15:55:29.331611 | orchestrator | ok: [testbed-manager] 2025-05-31 15:55:29.331716 | orchestrator | 2025-05-31 15:55:29.331886 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-31 15:55:29.332142 | orchestrator | Saturday 31 May 2025 15:55:29 +0000 (0:00:01.273) 0:00:01.466 ********** 2025-05-31 15:55:30.432882 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-31 15:55:30.433298 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-31 15:55:30.433923 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-31 15:55:30.433943 | orchestrator | 2025-05-31 15:55:30.434278 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-31 15:55:30.434375 | orchestrator | Saturday 31 May 2025 15:55:30 +0000 (0:00:01.102) 0:00:02.568 ********** 2025-05-31 15:55:31.499833 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-31 15:55:31.499940 | orchestrator | 2025-05-31 15:55:31.500449 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-31 15:55:31.500974 | orchestrator | Saturday 31 May 2025 15:55:31 +0000 (0:00:01.065) 0:00:03.633 ********** 2025-05-31 15:55:31.839393 | orchestrator | ok: [testbed-manager] 2025-05-31 15:55:31.839650 | orchestrator | 2025-05-31 15:55:31.840533 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-31 15:55:31.840942 | orchestrator | Saturday 31 May 2025 15:55:31 +0000 (0:00:00.341) 0:00:03.975 ********** 2025-05-31 15:55:32.767069 | orchestrator | 
changed: [testbed-manager] 2025-05-31 15:55:32.767265 | orchestrator | 2025-05-31 15:55:32.767892 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-31 15:55:32.768923 | orchestrator | Saturday 31 May 2025 15:55:32 +0000 (0:00:00.925) 0:00:04.901 ********** 2025-05-31 15:56:03.798163 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-31 15:56:03.798284 | orchestrator | ok: [testbed-manager] 2025-05-31 15:56:03.798348 | orchestrator | 2025-05-31 15:56:03.798429 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-31 15:56:03.799089 | orchestrator | Saturday 31 May 2025 15:56:03 +0000 (0:00:31.030) 0:00:35.931 ********** 2025-05-31 15:56:16.291776 | orchestrator | changed: [testbed-manager] 2025-05-31 15:56:16.294013 | orchestrator | 2025-05-31 15:56:16.295701 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-31 15:56:16.296179 | orchestrator | Saturday 31 May 2025 15:56:16 +0000 (0:00:12.492) 0:00:48.424 ********** 2025-05-31 15:57:16.374094 | orchestrator | Pausing for 60 seconds 2025-05-31 15:57:16.374244 | orchestrator | changed: [testbed-manager] 2025-05-31 15:57:16.374275 | orchestrator | 2025-05-31 15:57:16.374297 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-31 15:57:16.374318 | orchestrator | Saturday 31 May 2025 15:57:16 +0000 (0:01:00.078) 0:01:48.502 ********** 2025-05-31 15:57:16.427811 | orchestrator | ok: [testbed-manager] 2025-05-31 15:57:16.428295 | orchestrator | 2025-05-31 15:57:16.429879 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-31 15:57:16.430270 | orchestrator | Saturday 31 May 2025 15:57:16 +0000 (0:00:00.059) 0:01:48.562 ********** 2025-05-31 15:57:17.029488 | orchestrator | changed: [testbed-manager] 2025-05-31 15:57:17.029710 | orchestrator | 2025-05-31 15:57:17.030342 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:57:17.030952 | orchestrator | 2025-05-31 15:57:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 15:57:17.030977 | orchestrator | 2025-05-31 15:57:17 | INFO  | Please wait and do not abort execution. 
2025-05-31 15:57:17.031631 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 15:57:17.032137 | orchestrator | 2025-05-31 15:57:17.032575 | orchestrator | Saturday 31 May 2025 15:57:17 +0000 (0:00:00.603) 0:01:49.165 ********** 2025-05-31 15:57:17.032954 | orchestrator | =============================================================================== 2025-05-31 15:57:17.033349 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-31 15:57:17.033813 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.03s 2025-05-31 15:57:17.034075 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.49s 2025-05-31 15:57:17.034298 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.27s 2025-05-31 15:57:17.034859 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.10s 2025-05-31 15:57:17.035273 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2025-05-31 15:57:17.035481 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2025-05-31 15:57:17.035905 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-05-31 15:57:17.036261 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2025-05-31 15:57:17.036672 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-31 15:57:17.036916 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-05-31 15:57:17.394289 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-31 15:57:17.394398 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-31 15:57:17.399816 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-31 15:57:17.457155 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-31 15:57:17.457237 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-31 15:57:17.457248 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-31 15:57:17.460916 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-31 15:57:17.465683 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-31 15:57:17.469694 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-31 15:57:18.870511 | orchestrator | 2025-05-31 15:57:18 | INFO  | Task 2c26b802-6e44-4e7e-b42f-27d929823e76 (operator) was prepared for execution. 2025-05-31 15:57:18.870612 | orchestrator | 2025-05-31 15:57:18 | INFO  | It takes a moment until task 2c26b802-6e44-4e7e-b42f-27d929823e76 (operator) has been started and output is visible here. 
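The sed calls a few lines above uncomment the network_dispatcher_scripts block in the testbed group_vars so that the vxlan.sh hook is installed as a networkd-dispatcher script (routable.d) on the nodes and the manager. Approximated by hand, the re-enabled fragment looks roughly like the following heredoc; the exact indentation and surrounding file content are assumptions, and the real files already contain the block in commented form:

# Illustration only: what the sed edits above effectively enable.
cat >> /opt/configuration/inventory/group_vars/testbed-nodes.yml <<'EOF'
network_dispatcher_scripts:
  - src: /opt/configuration/network/vxlan.sh
    dest: routable.d/vxlan.sh
EOF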
2025-05-31 15:57:21.800656 | orchestrator | 2025-05-31 15:57:21.800784 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-31 15:57:21.804794 | orchestrator | 2025-05-31 15:57:21.805106 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 15:57:21.806890 | orchestrator | Saturday 31 May 2025 15:57:21 +0000 (0:00:00.085) 0:00:00.085 ********** 2025-05-31 15:57:25.150543 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:25.150724 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:57:25.150815 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:57:25.150947 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:57:25.152736 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:25.152781 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:25.153202 | orchestrator | 2025-05-31 15:57:25.153937 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-31 15:57:25.154260 | orchestrator | Saturday 31 May 2025 15:57:25 +0000 (0:00:03.349) 0:00:03.434 ********** 2025-05-31 15:57:25.923677 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:25.928544 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:57:25.928768 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:25.929949 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:25.930822 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:57:25.932596 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:57:25.933273 | orchestrator | 2025-05-31 15:57:25.937920 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-31 15:57:25.937988 | orchestrator | 2025-05-31 15:57:25.938004 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-31 15:57:25.938532 | orchestrator | Saturday 31 May 2025 15:57:25 +0000 (0:00:00.773) 0:00:04.207 ********** 2025-05-31 15:57:25.984099 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:57:26.008614 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:57:26.027169 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:57:26.057938 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:26.058069 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:26.062151 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:26.062680 | orchestrator | 2025-05-31 15:57:26.063784 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-31 15:57:26.064639 | orchestrator | Saturday 31 May 2025 15:57:26 +0000 (0:00:00.135) 0:00:04.343 ********** 2025-05-31 15:57:26.126320 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:57:26.145560 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:57:26.202614 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:57:26.203039 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:26.205157 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:26.205418 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:26.206079 | orchestrator | 2025-05-31 15:57:26.206369 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-31 15:57:26.206849 | orchestrator | Saturday 31 May 2025 15:57:26 +0000 (0:00:00.145) 0:00:04.488 ********** 2025-05-31 15:57:26.893184 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:26.893294 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:26.893732 | orchestrator | changed: [testbed-node-3] 2025-05-31 
15:57:26.895429 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:26.895977 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:26.900007 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:26.900391 | orchestrator | 2025-05-31 15:57:26.901724 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-31 15:57:26.901856 | orchestrator | Saturday 31 May 2025 15:57:26 +0000 (0:00:00.687) 0:00:05.176 ********** 2025-05-31 15:57:27.703572 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:27.706378 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:27.706653 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:27.706994 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:27.707631 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:27.708721 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:27.713186 | orchestrator | 2025-05-31 15:57:27.713302 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-31 15:57:27.713871 | orchestrator | Saturday 31 May 2025 15:57:27 +0000 (0:00:00.810) 0:00:05.986 ********** 2025-05-31 15:57:29.012046 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-31 15:57:29.012149 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-31 15:57:29.013045 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-31 15:57:29.019037 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-31 15:57:29.019090 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-31 15:57:29.019128 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-31 15:57:29.019140 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-31 15:57:29.019237 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-31 15:57:29.019250 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-31 15:57:29.019261 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-31 15:57:29.019273 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-31 15:57:29.019340 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-31 15:57:29.019777 | orchestrator | 2025-05-31 15:57:29.020280 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-31 15:57:29.021008 | orchestrator | Saturday 31 May 2025 15:57:29 +0000 (0:00:01.308) 0:00:07.294 ********** 2025-05-31 15:57:30.247085 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:30.247191 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:30.247273 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:30.247289 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:30.247394 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:30.247712 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:30.247921 | orchestrator | 2025-05-31 15:57:30.248102 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-31 15:57:30.248315 | orchestrator | Saturday 31 May 2025 15:57:30 +0000 (0:00:01.235) 0:00:08.530 ********** 2025-05-31 15:57:31.498066 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-31 15:57:31.500898 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-31 15:57:31.501536 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-31 15:57:31.682302 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 15:57:31.687007 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 15:57:31.687081 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 15:57:31.687094 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 15:57:31.687106 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 15:57:31.687579 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 15:57:31.688227 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-31 15:57:31.688916 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-31 15:57:31.689570 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-31 15:57:31.690198 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-31 15:57:31.690662 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-31 15:57:31.691125 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-31 15:57:31.691731 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:57:31.694250 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:57:31.694580 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:57:31.695087 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:57:31.695639 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:57:31.696113 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-31 15:57:31.696532 | orchestrator | 2025-05-31 15:57:31.697065 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-31 15:57:31.697379 | orchestrator | Saturday 31 May 2025 15:57:31 +0000 (0:00:01.437) 0:00:09.967 ********** 2025-05-31 15:57:32.334968 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:32.335209 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:32.338891 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:32.338935 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:32.338974 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:32.338986 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:32.339649 | orchestrator | 2025-05-31 15:57:32.340018 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-31 15:57:32.340666 | orchestrator | Saturday 31 May 2025 15:57:32 +0000 (0:00:00.651) 0:00:10.618 ********** 2025-05-31 15:57:32.402243 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:57:32.423867 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:57:32.445874 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:57:32.485368 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:57:32.486961 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:57:32.488357 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:57:32.489099 | orchestrator | 2025-05-31 15:57:32.493554 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
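Taken together, the operator tasks above amount to standard user provisioning: a dedicated group and user, membership in adm and sudo, a passwordless sudoers drop-in, C.UTF-8 locale exports in .bashrc, and a private ~/.ssh directory that the following task fills with authorized keys. A rough shell equivalent, with the user name "dragon" and the sudoers line as assumptions:

# Hand-rolled equivalent of the operator role tasks above (sketch only).
groupadd dragon
useradd -m -g dragon -s /bin/bash dragon
usermod -aG adm,sudo dragon                                  # "Add user to additional groups"
echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon
chmod 0440 /etc/sudoers.d/dragon                             # "Copy user sudoers file"
printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc
install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh     # "Create .ssh directory"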
2025-05-31 15:57:32.494187 | orchestrator | Saturday 31 May 2025 15:57:32 +0000 (0:00:00.153) 0:00:10.771 ********** 2025-05-31 15:57:33.195167 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 15:57:33.195379 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:33.196350 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 15:57:33.196824 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:33.197919 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 15:57:33.198588 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:33.199417 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-31 15:57:33.199973 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:33.200957 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-31 15:57:33.201856 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:33.202086 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 15:57:33.203343 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:33.204093 | orchestrator | 2025-05-31 15:57:33.205972 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-31 15:57:33.206299 | orchestrator | Saturday 31 May 2025 15:57:33 +0000 (0:00:00.706) 0:00:11.478 ********** 2025-05-31 15:57:33.246873 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:57:33.262246 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:57:33.281596 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:57:33.343508 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:57:33.344257 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:57:33.344732 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:57:33.345501 | orchestrator | 2025-05-31 15:57:33.346301 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-31 15:57:33.346822 | orchestrator | Saturday 31 May 2025 15:57:33 +0000 (0:00:00.151) 0:00:11.629 ********** 2025-05-31 15:57:33.383492 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:57:33.405809 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:57:33.424752 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:57:33.446862 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:57:33.467553 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:57:33.467613 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:57:33.467841 | orchestrator | 2025-05-31 15:57:33.468541 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-31 15:57:33.469503 | orchestrator | Saturday 31 May 2025 15:57:33 +0000 (0:00:00.124) 0:00:11.753 ********** 2025-05-31 15:57:33.537992 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:57:33.555650 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:57:33.577856 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:57:33.604998 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:57:33.605273 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:57:33.608801 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:57:33.608856 | orchestrator | 2025-05-31 15:57:33.608942 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-31 15:57:33.609317 | orchestrator | Saturday 31 May 2025 15:57:33 +0000 (0:00:00.136) 0:00:11.890 ********** 2025-05-31 15:57:34.294689 | orchestrator | changed: [testbed-node-0] 2025-05-31 
15:57:34.295774 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:34.295938 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:34.297555 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:34.298548 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:34.299320 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:34.300112 | orchestrator | 2025-05-31 15:57:34.300898 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-31 15:57:34.302315 | orchestrator | Saturday 31 May 2025 15:57:34 +0000 (0:00:00.687) 0:00:12.578 ********** 2025-05-31 15:57:34.389919 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:57:34.426313 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:57:34.538327 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:57:34.538432 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:57:34.538497 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:57:34.538509 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:57:34.538587 | orchestrator | 2025-05-31 15:57:34.539328 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:57:34.539599 | orchestrator | 2025-05-31 15:57:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 15:57:34.539714 | orchestrator | 2025-05-31 15:57:34 | INFO  | Please wait and do not abort execution. 2025-05-31 15:57:34.540847 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 15:57:34.541845 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 15:57:34.542472 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 15:57:34.543021 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 15:57:34.543512 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 15:57:34.544011 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 15:57:34.544516 | orchestrator | 2025-05-31 15:57:34.545031 | orchestrator | Saturday 31 May 2025 15:57:34 +0000 (0:00:00.239) 0:00:12.817 ********** 2025-05-31 15:57:34.545798 | orchestrator | =============================================================================== 2025-05-31 15:57:34.546517 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s 2025-05-31 15:57:34.547204 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.44s 2025-05-31 15:57:34.547969 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.31s 2025-05-31 15:57:34.548639 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s 2025-05-31 15:57:34.549558 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s 2025-05-31 15:57:34.550223 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2025-05-31 15:57:34.550397 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-05-31 15:57:34.550930 | orchestrator | osism.commons.operator : Create operator group 
-------------------------- 0.69s 2025-05-31 15:57:34.551223 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2025-05-31 15:57:34.551675 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.65s 2025-05-31 15:57:34.552141 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2025-05-31 15:57:34.552567 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2025-05-31 15:57:34.553280 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-05-31 15:57:34.553505 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-05-31 15:57:34.553655 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2025-05-31 15:57:34.554143 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2025-05-31 15:57:34.554371 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s 2025-05-31 15:57:34.924763 | orchestrator | + osism apply --environment custom facts 2025-05-31 15:57:36.257109 | orchestrator | 2025-05-31 15:57:36 | INFO  | Trying to run play facts in environment custom 2025-05-31 15:57:36.312056 | orchestrator | 2025-05-31 15:57:36 | INFO  | Task 49f9523d-a225-4f55-b4e0-182486259ad4 (facts) was prepared for execution. 2025-05-31 15:57:36.312149 | orchestrator | 2025-05-31 15:57:36 | INFO  | It takes a moment until task 49f9523d-a225-4f55-b4e0-182486259ad4 (facts) has been started and output is visible here. 2025-05-31 15:57:39.300688 | orchestrator | 2025-05-31 15:57:39.300845 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-31 15:57:39.301972 | orchestrator | 2025-05-31 15:57:39.302468 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 15:57:39.303324 | orchestrator | Saturday 31 May 2025 15:57:39 +0000 (0:00:00.072) 0:00:00.072 ********** 2025-05-31 15:57:40.499861 | orchestrator | ok: [testbed-manager] 2025-05-31 15:57:41.585239 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:41.585477 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:41.586152 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:41.590389 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:41.591085 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:41.592119 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:41.592558 | orchestrator | 2025-05-31 15:57:41.593327 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-31 15:57:41.594643 | orchestrator | Saturday 31 May 2025 15:57:41 +0000 (0:00:02.285) 0:00:02.358 ********** 2025-05-31 15:57:42.703724 | orchestrator | ok: [testbed-manager] 2025-05-31 15:57:43.575317 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:57:43.575402 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:43.578069 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:43.578100 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:43.578286 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:57:43.578580 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:57:43.578849 | orchestrator | 2025-05-31 15:57:43.581889 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-05-31 15:57:43.582050 | orchestrator | 2025-05-31 15:57:43.582301 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 15:57:43.582606 | orchestrator | Saturday 31 May 2025 15:57:43 +0000 (0:00:01.989) 0:00:04.347 ********** 2025-05-31 15:57:43.671878 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:43.671962 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:43.672643 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:43.673115 | orchestrator | 2025-05-31 15:57:43.673138 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 15:57:43.673297 | orchestrator | Saturday 31 May 2025 15:57:43 +0000 (0:00:00.099) 0:00:04.447 ********** 2025-05-31 15:57:43.775624 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:43.775806 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:43.775828 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:43.776098 | orchestrator | 2025-05-31 15:57:43.776510 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 15:57:43.776603 | orchestrator | Saturday 31 May 2025 15:57:43 +0000 (0:00:00.103) 0:00:04.550 ********** 2025-05-31 15:57:43.880722 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:43.881132 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:43.881733 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:43.882248 | orchestrator | 2025-05-31 15:57:43.882858 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 15:57:43.883421 | orchestrator | Saturday 31 May 2025 15:57:43 +0000 (0:00:00.101) 0:00:04.651 ********** 2025-05-31 15:57:43.998286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 15:57:43.998393 | orchestrator | 2025-05-31 15:57:43.998533 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 15:57:43.998740 | orchestrator | Saturday 31 May 2025 15:57:43 +0000 (0:00:00.121) 0:00:04.773 ********** 2025-05-31 15:57:44.420543 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:44.420639 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:44.421155 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:44.421586 | orchestrator | 2025-05-31 15:57:44.422196 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 15:57:44.422926 | orchestrator | Saturday 31 May 2025 15:57:44 +0000 (0:00:00.419) 0:00:05.193 ********** 2025-05-31 15:57:44.514419 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:57:44.515727 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:57:44.516298 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:57:44.518481 | orchestrator | 2025-05-31 15:57:44.519714 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31 15:57:44.520261 | orchestrator | Saturday 31 May 2025 15:57:44 +0000 (0:00:00.092) 0:00:05.285 ********** 2025-05-31 15:57:45.490636 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:45.490725 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:45.490739 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:45.490751 | orchestrator | 2025-05-31 15:57:45.494007 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-05-31 15:57:45.494285 | orchestrator | Saturday 31 May 2025 15:57:45 +0000 (0:00:00.977) 0:00:06.262 ********** 2025-05-31 15:57:45.934388 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:57:45.934957 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:57:45.937293 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:57:45.937318 | orchestrator | 2025-05-31 15:57:45.937332 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 15:57:45.937859 | orchestrator | Saturday 31 May 2025 15:57:45 +0000 (0:00:00.446) 0:00:06.709 ********** 2025-05-31 15:57:46.941818 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:57:46.946125 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:57:46.947125 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:57:46.949620 | orchestrator | 2025-05-31 15:57:46.952979 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 15:57:46.953563 | orchestrator | Saturday 31 May 2025 15:57:46 +0000 (0:00:01.004) 0:00:07.714 ********** 2025-05-31 15:58:00.802352 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:00.802541 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:00.802573 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:00.802593 | orchestrator | 2025-05-31 15:58:00.802861 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-31 15:58:00.802886 | orchestrator | Saturday 31 May 2025 15:58:00 +0000 (0:00:13.857) 0:00:21.571 ********** 2025-05-31 15:58:00.845383 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:00.871764 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:00.874563 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:58:00.874597 | orchestrator | 2025-05-31 15:58:00.874610 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-31 15:58:00.874623 | orchestrator | Saturday 31 May 2025 15:58:00 +0000 (0:00:00.075) 0:00:21.646 ********** 2025-05-31 15:58:08.545210 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:08.545331 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:08.555632 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:08.555667 | orchestrator | 2025-05-31 15:58:08.555681 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 15:58:08.555695 | orchestrator | Saturday 31 May 2025 15:58:08 +0000 (0:00:07.665) 0:00:29.311 ********** 2025-05-31 15:58:08.989765 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:08.990599 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:08.991220 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:08.992657 | orchestrator | 2025-05-31 15:58:08.993149 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-31 15:58:08.994150 | orchestrator | Saturday 31 May 2025 15:58:08 +0000 (0:00:00.451) 0:00:29.763 ********** 2025-05-31 15:58:12.401247 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-31 15:58:12.401438 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-31 15:58:12.401887 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-31 15:58:12.402380 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
2025-05-31 15:58:12.405073 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-31 15:58:12.406343 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-31 15:58:12.407134 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-31 15:58:12.407857 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-31 15:58:12.409062 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-31 15:58:12.409658 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-31 15:58:12.409946 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-31 15:58:12.410560 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-31 15:58:12.411898 | orchestrator | 2025-05-31 15:58:12.411927 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 15:58:12.412318 | orchestrator | Saturday 31 May 2025 15:58:12 +0000 (0:00:03.410) 0:00:33.173 ********** 2025-05-31 15:58:13.483736 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:13.483836 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:13.485400 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:13.486339 | orchestrator | 2025-05-31 15:58:13.487037 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 15:58:13.487896 | orchestrator | 2025-05-31 15:58:13.488631 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 15:58:13.489320 | orchestrator | Saturday 31 May 2025 15:58:13 +0000 (0:00:01.082) 0:00:34.256 ********** 2025-05-31 15:58:15.183327 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:18.395698 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:18.396264 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:18.397397 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:18.399657 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:18.400506 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:18.401155 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:18.402272 | orchestrator | 2025-05-31 15:58:18.403276 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 15:58:18.404249 | orchestrator | 2025-05-31 15:58:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 15:58:18.404345 | orchestrator | 2025-05-31 15:58:18 | INFO  | Please wait and do not abort execution. 
2025-05-31 15:58:18.405152 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 15:58:18.405551 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 15:58:18.406637 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 15:58:18.406924 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 15:58:18.407651 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-31 15:58:18.408133 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-31 15:58:18.408591 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-31 15:58:18.409229 | orchestrator |
2025-05-31 15:58:18.409838 | orchestrator | Saturday 31 May 2025 15:58:18 +0000 (0:00:04.912) 0:00:39.168 **********
2025-05-31 15:58:18.410369 | orchestrator | ===============================================================================
2025-05-31 15:58:18.410744 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.86s
2025-05-31 15:58:18.411036 | orchestrator | Install required packages (Debian) -------------------------------------- 7.67s
2025-05-31 15:58:18.411484 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.91s
2025-05-31 15:58:18.411970 | orchestrator | Copy fact files --------------------------------------------------------- 3.41s
2025-05-31 15:58:18.412303 | orchestrator | Create custom facts directory ------------------------------------------- 2.29s
2025-05-31 15:58:18.412538 | orchestrator | Copy fact file ---------------------------------------------------------- 1.99s
2025-05-31 15:58:18.413005 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.08s
2025-05-31 15:58:18.413229 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.00s
2025-05-31 15:58:18.413607 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.98s
2025-05-31 15:58:18.414083 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-05-31 15:58:18.414280 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-05-31 15:58:18.414632 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2025-05-31 15:58:18.414962 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-05-31 15:58:18.415238 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.10s
2025-05-31 15:58:18.415614 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.10s
2025-05-31 15:58:18.415930 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-05-31 15:58:18.417311 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-05-31 15:58:18.417443 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2025-05-31 15:58:18.777854 | orchestrator | + osism apply bootstrap
2025-05-31 15:58:20.139864 |
orchestrator | 2025-05-31 15:58:20 | INFO  | Task 136ef880-27eb-4ddf-a0e1-a19ca166893b (bootstrap) was prepared for execution. 2025-05-31 15:58:20.139967 | orchestrator | 2025-05-31 15:58:20 | INFO  | It takes a moment until task 136ef880-27eb-4ddf-a0e1-a19ca166893b (bootstrap) has been started and output is visible here. 2025-05-31 15:58:23.179829 | orchestrator | 2025-05-31 15:58:23.180185 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-31 15:58:23.182921 | orchestrator | 2025-05-31 15:58:23.182952 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-31 15:58:23.183401 | orchestrator | Saturday 31 May 2025 15:58:23 +0000 (0:00:00.101) 0:00:00.101 ********** 2025-05-31 15:58:23.249508 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:23.272114 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:23.306894 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:23.332996 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:23.410059 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:23.410241 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:23.411299 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:23.412956 | orchestrator | 2025-05-31 15:58:23.413217 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 15:58:23.413889 | orchestrator | 2025-05-31 15:58:23.414242 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 15:58:23.414768 | orchestrator | Saturday 31 May 2025 15:58:23 +0000 (0:00:00.234) 0:00:00.335 ********** 2025-05-31 15:58:27.132837 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:27.132952 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:27.133175 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:27.133789 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:27.134433 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:27.134926 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:27.135487 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:27.136145 | orchestrator | 2025-05-31 15:58:27.136653 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-31 15:58:27.137295 | orchestrator | 2025-05-31 15:58:27.137621 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 15:58:27.138716 | orchestrator | Saturday 31 May 2025 15:58:27 +0000 (0:00:03.719) 0:00:04.054 ********** 2025-05-31 15:58:27.207880 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-31 15:58:27.240626 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-31 15:58:27.240746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-31 15:58:27.244661 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-31 15:58:27.246137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 15:58:27.288566 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-31 15:58:27.288621 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-31 15:58:27.288865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-31 15:58:27.289231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 15:58:27.289937 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-3)  2025-05-31 15:58:27.565425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 15:58:27.565751 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-31 15:58:27.566401 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 15:58:27.567416 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 15:58:27.568859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 15:58:27.569118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-31 15:58:27.572241 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 15:58:27.572361 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-31 15:58:27.573240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 15:58:27.573849 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-31 15:58:27.574402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 15:58:27.574959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 15:58:27.575554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 15:58:27.576056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 15:58:27.578955 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-31 15:58:27.581802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 15:58:27.581850 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 15:58:27.581863 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-31 15:58:27.581874 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:27.581912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 15:58:27.581925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 15:58:27.581935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 15:58:27.582119 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-31 15:58:27.582784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 15:58:27.583050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 15:58:27.583517 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:27.583886 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-31 15:58:27.584164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 15:58:27.584629 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 15:58:27.584960 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:27.585332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-31 15:58:27.585729 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 15:58:27.586129 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 15:58:27.586392 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:58:27.586819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 15:58:27.587131 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-31 15:58:27.590432 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:58:27.590668 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2025-05-31 15:58:27.591106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 15:58:27.591689 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 15:58:27.591972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 15:58:27.592824 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:58:27.592878 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-31 15:58:27.592891 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-31 15:58:27.592958 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-31 15:58:27.593303 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:58:27.593605 | orchestrator | 2025-05-31 15:58:27.593992 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-31 15:58:27.594232 | orchestrator | 2025-05-31 15:58:27.594548 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-31 15:58:27.594841 | orchestrator | Saturday 31 May 2025 15:58:27 +0000 (0:00:00.434) 0:00:04.489 ********** 2025-05-31 15:58:27.648505 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:27.669223 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:27.690623 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:27.711563 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:27.763206 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:27.766917 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:27.766951 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:27.766964 | orchestrator | 2025-05-31 15:58:27.766978 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-31 15:58:27.766991 | orchestrator | Saturday 31 May 2025 15:58:27 +0000 (0:00:00.197) 0:00:04.687 ********** 2025-05-31 15:58:28.952777 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:28.953553 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:28.954596 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:28.955328 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:28.955826 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:28.956109 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:28.956588 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:28.956993 | orchestrator | 2025-05-31 15:58:28.957335 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-31 15:58:28.957821 | orchestrator | Saturday 31 May 2025 15:58:28 +0000 (0:00:01.188) 0:00:05.876 ********** 2025-05-31 15:58:30.078443 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:30.078844 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:30.078877 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:30.079508 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:30.080010 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:30.080342 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:30.080881 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:30.081410 | orchestrator | 2025-05-31 15:58:30.082115 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-31 15:58:30.082949 | orchestrator | Saturday 31 May 2025 15:58:30 +0000 (0:00:01.123) 0:00:06.999 ********** 2025-05-31 15:58:30.332236 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:58:30.332340 | orchestrator | 2025-05-31 15:58:30.335894 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-31 15:58:30.335940 | orchestrator | Saturday 31 May 2025 15:58:30 +0000 (0:00:00.255) 0:00:07.255 ********** 2025-05-31 15:58:32.284520 | orchestrator | changed: [testbed-manager] 2025-05-31 15:58:32.284702 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:32.285094 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:32.285826 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:32.288810 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:32.288860 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:32.289088 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:32.289448 | orchestrator | 2025-05-31 15:58:32.289824 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-31 15:58:32.290174 | orchestrator | Saturday 31 May 2025 15:58:32 +0000 (0:00:01.951) 0:00:09.207 ********** 2025-05-31 15:58:32.338196 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:32.530819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:58:32.531302 | orchestrator | 2025-05-31 15:58:32.531715 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-31 15:58:32.533132 | orchestrator | Saturday 31 May 2025 15:58:32 +0000 (0:00:00.245) 0:00:09.452 ********** 2025-05-31 15:58:33.571177 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:33.571890 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:33.572591 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:33.572850 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:33.573628 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:33.574378 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:33.575015 | orchestrator | 2025-05-31 15:58:33.575881 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-31 15:58:33.576205 | orchestrator | Saturday 31 May 2025 15:58:33 +0000 (0:00:01.041) 0:00:10.494 ********** 2025-05-31 15:58:33.641566 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:34.139798 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:34.139908 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:34.139923 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:34.140421 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:34.141134 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:34.141691 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:34.142078 | orchestrator | 2025-05-31 15:58:34.143851 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-31 15:58:34.144057 | orchestrator | Saturday 31 May 2025 15:58:34 +0000 (0:00:00.568) 0:00:11.063 ********** 2025-05-31 15:58:34.232512 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:34.253444 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:34.275593 | 
orchestrator | skipping: [testbed-node-5] 2025-05-31 15:58:34.563227 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:58:34.563324 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:58:34.563330 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:58:34.563335 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:34.563341 | orchestrator | 2025-05-31 15:58:34.563347 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-31 15:58:34.563353 | orchestrator | Saturday 31 May 2025 15:58:34 +0000 (0:00:00.420) 0:00:11.483 ********** 2025-05-31 15:58:34.636752 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:34.663001 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:34.687971 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:34.709535 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:58:34.767057 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:58:34.769172 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:58:34.769250 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:58:34.769261 | orchestrator | 2025-05-31 15:58:34.770132 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-31 15:58:34.770187 | orchestrator | Saturday 31 May 2025 15:58:34 +0000 (0:00:00.207) 0:00:11.691 ********** 2025-05-31 15:58:35.029354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:58:35.029559 | orchestrator | 2025-05-31 15:58:35.030332 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-31 15:58:35.030834 | orchestrator | Saturday 31 May 2025 15:58:35 +0000 (0:00:00.262) 0:00:11.953 ********** 2025-05-31 15:58:35.318785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:58:35.319072 | orchestrator | 2025-05-31 15:58:35.319174 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-31 15:58:35.319532 | orchestrator | Saturday 31 May 2025 15:58:35 +0000 (0:00:00.289) 0:00:12.242 ********** 2025-05-31 15:58:36.692047 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:36.695665 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:36.695717 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:36.695730 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:36.695763 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:36.695775 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:36.696083 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:36.696574 | orchestrator | 2025-05-31 15:58:36.697118 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-31 15:58:36.699037 | orchestrator | Saturday 31 May 2025 15:58:36 +0000 (0:00:01.372) 0:00:13.614 ********** 2025-05-31 15:58:36.762492 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:36.787394 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:36.818827 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:36.844156 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 15:58:36.896336 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:58:36.896567 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:58:36.898195 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:58:36.898625 | orchestrator | 2025-05-31 15:58:36.898973 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-31 15:58:36.899188 | orchestrator | Saturday 31 May 2025 15:58:36 +0000 (0:00:00.206) 0:00:13.821 ********** 2025-05-31 15:58:37.419159 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:37.419269 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:37.420097 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:37.420944 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:37.421959 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:37.423210 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:37.423868 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:37.426114 | orchestrator | 2025-05-31 15:58:37.426144 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-31 15:58:37.426989 | orchestrator | Saturday 31 May 2025 15:58:37 +0000 (0:00:00.520) 0:00:14.341 ********** 2025-05-31 15:58:37.497296 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:37.522853 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:37.543828 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:37.569287 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:58:37.638744 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:58:37.638846 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:58:37.639143 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:58:37.639887 | orchestrator | 2025-05-31 15:58:37.640676 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-31 15:58:37.641092 | orchestrator | Saturday 31 May 2025 15:58:37 +0000 (0:00:00.221) 0:00:14.563 ********** 2025-05-31 15:58:38.190371 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:38.192875 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:38.192923 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:38.193535 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:38.194692 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:38.195675 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:38.196504 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:38.197333 | orchestrator | 2025-05-31 15:58:38.198074 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-31 15:58:38.198799 | orchestrator | Saturday 31 May 2025 15:58:38 +0000 (0:00:00.549) 0:00:15.113 ********** 2025-05-31 15:58:39.237561 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:39.237770 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:39.238611 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:39.240213 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:39.240561 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:39.241650 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:39.242631 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:39.243118 | orchestrator | 2025-05-31 15:58:39.243325 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-31 15:58:39.244227 | orchestrator | Saturday 31 May 2025 
15:58:39 +0000 (0:00:01.046) 0:00:16.159 ********** 2025-05-31 15:58:40.323017 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:40.324117 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:40.325510 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:40.326194 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:40.327398 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:40.328664 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:40.329136 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:40.329957 | orchestrator | 2025-05-31 15:58:40.330521 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-31 15:58:40.331148 | orchestrator | Saturday 31 May 2025 15:58:40 +0000 (0:00:01.085) 0:00:17.245 ********** 2025-05-31 15:58:40.590573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:58:40.591026 | orchestrator | 2025-05-31 15:58:40.591544 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-31 15:58:40.593230 | orchestrator | Saturday 31 May 2025 15:58:40 +0000 (0:00:00.269) 0:00:17.514 ********** 2025-05-31 15:58:40.661346 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:42.042837 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:58:42.042945 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:42.042960 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:58:42.042973 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:58:42.042984 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:42.042995 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:42.044692 | orchestrator | 2025-05-31 15:58:42.045963 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 15:58:42.047328 | orchestrator | Saturday 31 May 2025 15:58:42 +0000 (0:00:01.446) 0:00:18.961 ********** 2025-05-31 15:58:42.102931 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:42.129721 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:42.144590 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:42.166637 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:42.213619 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:42.217041 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:42.217119 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:42.217135 | orchestrator | 2025-05-31 15:58:42.217149 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 15:58:42.218006 | orchestrator | Saturday 31 May 2025 15:58:42 +0000 (0:00:00.176) 0:00:19.138 ********** 2025-05-31 15:58:42.275957 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:42.320546 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:42.341728 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:42.410876 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:42.411649 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:42.412676 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:42.413171 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:42.413886 | orchestrator | 2025-05-31 15:58:42.414509 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 15:58:42.415105 | 
orchestrator | Saturday 31 May 2025 15:58:42 +0000 (0:00:00.196) 0:00:19.335 ********** 2025-05-31 15:58:42.481590 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:42.504734 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:42.529666 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:42.550938 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:42.602730 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:42.603095 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:42.604123 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:42.604857 | orchestrator | 2025-05-31 15:58:42.605307 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 15:58:42.606115 | orchestrator | Saturday 31 May 2025 15:58:42 +0000 (0:00:00.192) 0:00:19.527 ********** 2025-05-31 15:58:42.850536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:58:42.850926 | orchestrator | 2025-05-31 15:58:42.851083 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 15:58:42.852277 | orchestrator | Saturday 31 May 2025 15:58:42 +0000 (0:00:00.245) 0:00:19.773 ********** 2025-05-31 15:58:43.367147 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:43.367248 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:43.367270 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:43.367289 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:43.368221 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:43.368250 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:43.368261 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:43.368274 | orchestrator | 2025-05-31 15:58:43.368288 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 15:58:43.368348 | orchestrator | Saturday 31 May 2025 15:58:43 +0000 (0:00:00.515) 0:00:20.288 ********** 2025-05-31 15:58:43.464305 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:58:43.485144 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:58:43.505846 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:58:43.566699 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:58:43.567150 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:58:43.570250 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:58:43.571080 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:58:43.571498 | orchestrator | 2025-05-31 15:58:43.571971 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31 15:58:43.572551 | orchestrator | Saturday 31 May 2025 15:58:43 +0000 (0:00:00.202) 0:00:20.491 ********** 2025-05-31 15:58:44.616337 | orchestrator | changed: [testbed-manager] 2025-05-31 15:58:44.616991 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:44.617417 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:44.618142 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:44.619073 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:44.619375 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:44.620349 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:44.620653 | orchestrator | 2025-05-31 15:58:44.621164 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-31 15:58:44.621812 | orchestrator | Saturday 31 May 2025 15:58:44 +0000 (0:00:01.047) 0:00:21.539 ********** 2025-05-31 15:58:45.133508 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:45.133624 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:45.133645 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:45.134124 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:45.135147 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:58:45.135977 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:58:45.136118 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:58:45.136784 | orchestrator | 2025-05-31 15:58:45.137625 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 15:58:45.137822 | orchestrator | Saturday 31 May 2025 15:58:45 +0000 (0:00:00.515) 0:00:22.055 ********** 2025-05-31 15:58:46.208456 | orchestrator | ok: [testbed-manager] 2025-05-31 15:58:46.209058 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:58:46.209093 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:58:46.209351 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:58:46.210009 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:58:46.211395 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:58:46.211552 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:58:46.212165 | orchestrator | 2025-05-31 15:58:46.212631 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 15:58:46.213599 | orchestrator | Saturday 31 May 2025 15:58:46 +0000 (0:00:01.075) 0:00:23.130 ********** 2025-05-31 15:59:00.260805 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:00.262838 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:00.262870 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:00.263077 | orchestrator | changed: [testbed-manager] 2025-05-31 15:59:00.263918 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:59:00.266580 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:59:00.267096 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:59:00.268093 | orchestrator | 2025-05-31 15:59:00.268924 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-31 15:59:00.269416 | orchestrator | Saturday 31 May 2025 15:59:00 +0000 (0:00:14.051) 0:00:37.181 ********** 2025-05-31 15:59:00.327619 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:00.351438 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:00.372669 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:00.398301 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:00.464959 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:00.465133 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:00.468767 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:00.468800 | orchestrator | 2025-05-31 15:59:00.470321 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-31 15:59:00.470987 | orchestrator | Saturday 31 May 2025 15:59:00 +0000 (0:00:00.207) 0:00:37.389 ********** 2025-05-31 15:59:00.532204 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:00.554797 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:00.575426 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:00.597963 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:00.648034 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:00.648486 | orchestrator | ok: [testbed-node-1] 2025-05-31 
15:59:00.649557 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:00.650007 | orchestrator | 2025-05-31 15:59:00.650373 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-31 15:59:00.650928 | orchestrator | Saturday 31 May 2025 15:59:00 +0000 (0:00:00.183) 0:00:37.573 ********** 2025-05-31 15:59:00.710398 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:00.733637 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:00.752603 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:00.778336 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:00.834670 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:00.835225 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:00.836753 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:00.837658 | orchestrator | 2025-05-31 15:59:00.838613 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-31 15:59:00.839904 | orchestrator | Saturday 31 May 2025 15:59:00 +0000 (0:00:00.186) 0:00:37.759 ********** 2025-05-31 15:59:01.083438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:59:01.083696 | orchestrator | 2025-05-31 15:59:01.084787 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-31 15:59:01.085865 | orchestrator | Saturday 31 May 2025 15:59:01 +0000 (0:00:00.248) 0:00:38.007 ********** 2025-05-31 15:59:02.655726 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:02.657340 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:02.658098 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:02.658700 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:02.660158 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:02.660727 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:02.661309 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:02.662145 | orchestrator | 2025-05-31 15:59:02.662376 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-31 15:59:02.662906 | orchestrator | Saturday 31 May 2025 15:59:02 +0000 (0:00:01.570) 0:00:39.578 ********** 2025-05-31 15:59:03.759237 | orchestrator | changed: [testbed-manager] 2025-05-31 15:59:03.759344 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:59:03.759359 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:59:03.759776 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:59:03.760627 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:59:03.761250 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:59:03.761637 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:59:03.763317 | orchestrator | 2025-05-31 15:59:03.763697 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-31 15:59:03.764145 | orchestrator | Saturday 31 May 2025 15:59:03 +0000 (0:00:01.102) 0:00:40.681 ********** 2025-05-31 15:59:04.588692 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:04.588858 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:04.589614 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:04.590210 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:04.591403 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:04.591423 | orchestrator | ok: 
[testbed-node-2] 2025-05-31 15:59:04.592388 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:04.592896 | orchestrator | 2025-05-31 15:59:04.593765 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-31 15:59:04.594408 | orchestrator | Saturday 31 May 2025 15:59:04 +0000 (0:00:00.831) 0:00:41.512 ********** 2025-05-31 15:59:04.870447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:59:04.871731 | orchestrator | 2025-05-31 15:59:04.872055 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-31 15:59:04.873175 | orchestrator | Saturday 31 May 2025 15:59:04 +0000 (0:00:00.278) 0:00:41.790 ********** 2025-05-31 15:59:05.865745 | orchestrator | changed: [testbed-manager] 2025-05-31 15:59:05.865898 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:59:05.869258 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:59:05.869303 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:59:05.869313 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:59:05.869322 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:59:05.869860 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:59:05.870498 | orchestrator | 2025-05-31 15:59:05.871488 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-31 15:59:05.871955 | orchestrator | Saturday 31 May 2025 15:59:05 +0000 (0:00:00.998) 0:00:42.789 ********** 2025-05-31 15:59:05.937576 | orchestrator | skipping: [testbed-manager] 2025-05-31 15:59:05.957176 | orchestrator | skipping: [testbed-node-3] 2025-05-31 15:59:05.979771 | orchestrator | skipping: [testbed-node-4] 2025-05-31 15:59:06.004268 | orchestrator | skipping: [testbed-node-5] 2025-05-31 15:59:06.147992 | orchestrator | skipping: [testbed-node-0] 2025-05-31 15:59:06.148116 | orchestrator | skipping: [testbed-node-1] 2025-05-31 15:59:06.149282 | orchestrator | skipping: [testbed-node-2] 2025-05-31 15:59:06.149704 | orchestrator | 2025-05-31 15:59:06.150732 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-31 15:59:06.151340 | orchestrator | Saturday 31 May 2025 15:59:06 +0000 (0:00:00.283) 0:00:43.072 ********** 2025-05-31 15:59:16.585145 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:59:16.585267 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:59:16.585284 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:59:16.585296 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:59:16.585392 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:59:16.586354 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:59:16.586808 | orchestrator | changed: [testbed-manager] 2025-05-31 15:59:16.587185 | orchestrator | 2025-05-31 15:59:16.588222 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-31 15:59:16.588460 | orchestrator | Saturday 31 May 2025 15:59:16 +0000 (0:00:10.431) 0:00:53.504 ********** 2025-05-31 15:59:17.656768 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:17.656958 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:17.657085 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:17.658780 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:17.658984 | 
orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:17.659393 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:17.659856 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:17.660305 | orchestrator | 2025-05-31 15:59:17.660933 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-31 15:59:17.661192 | orchestrator | Saturday 31 May 2025 15:59:17 +0000 (0:00:01.074) 0:00:54.579 ********** 2025-05-31 15:59:18.498199 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:18.498367 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:18.499615 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:18.503236 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:18.503291 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:18.503304 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:18.503315 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:18.503327 | orchestrator | 2025-05-31 15:59:18.503579 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-31 15:59:18.504670 | orchestrator | Saturday 31 May 2025 15:59:18 +0000 (0:00:00.842) 0:00:55.421 ********** 2025-05-31 15:59:18.594356 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:18.618739 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:18.641335 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:18.665838 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:18.724411 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:18.726097 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:18.726123 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:18.726135 | orchestrator | 2025-05-31 15:59:18.726952 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-31 15:59:18.727613 | orchestrator | Saturday 31 May 2025 15:59:18 +0000 (0:00:00.226) 0:00:55.648 ********** 2025-05-31 15:59:18.793900 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:18.828904 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:18.849376 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:18.872332 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:18.937906 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:18.941013 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:18.941043 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:18.941055 | orchestrator | 2025-05-31 15:59:18.941069 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-31 15:59:18.941542 | orchestrator | Saturday 31 May 2025 15:59:18 +0000 (0:00:00.213) 0:00:55.861 ********** 2025-05-31 15:59:19.199859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 15:59:19.200058 | orchestrator | 2025-05-31 15:59:19.200854 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-31 15:59:19.203868 | orchestrator | Saturday 31 May 2025 15:59:19 +0000 (0:00:00.262) 0:00:56.123 ********** 2025-05-31 15:59:20.686370 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:20.686544 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:20.686628 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:20.687583 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:20.688795 | 
orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:20.689409 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:20.690198 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:20.690941 | orchestrator | 2025-05-31 15:59:20.691387 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-31 15:59:20.692363 | orchestrator | Saturday 31 May 2025 15:59:20 +0000 (0:00:01.483) 0:00:57.607 ********** 2025-05-31 15:59:21.255000 | orchestrator | changed: [testbed-manager] 2025-05-31 15:59:21.255174 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:59:21.256047 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:59:21.257560 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:59:21.258191 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:59:21.258820 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:59:21.259595 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:59:21.259941 | orchestrator | 2025-05-31 15:59:21.260864 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-31 15:59:21.261119 | orchestrator | Saturday 31 May 2025 15:59:21 +0000 (0:00:00.570) 0:00:58.177 ********** 2025-05-31 15:59:21.326288 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:21.350271 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:21.371626 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:21.392606 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:21.445483 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:21.447544 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:21.447868 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:21.448074 | orchestrator | 2025-05-31 15:59:21.449987 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-31 15:59:21.450485 | orchestrator | Saturday 31 May 2025 15:59:21 +0000 (0:00:00.191) 0:00:58.369 ********** 2025-05-31 15:59:22.807804 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:22.807936 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:22.808017 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:22.808745 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:22.808912 | orchestrator | ok: [testbed-node-0] 2025-05-31 15:59:22.811048 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:22.811723 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:22.816754 | orchestrator | 2025-05-31 15:59:22.817626 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-31 15:59:22.817647 | orchestrator | Saturday 31 May 2025 15:59:22 +0000 (0:00:01.359) 0:00:59.728 ********** 2025-05-31 15:59:24.589321 | orchestrator | ok: [testbed-manager] 2025-05-31 15:59:24.589457 | orchestrator | changed: [testbed-node-5] 2025-05-31 15:59:24.589535 | orchestrator | changed: [testbed-node-2] 2025-05-31 15:59:24.589628 | orchestrator | changed: [testbed-node-3] 2025-05-31 15:59:24.590177 | orchestrator | changed: [testbed-node-0] 2025-05-31 15:59:24.590707 | orchestrator | ok: [testbed-node-1] 2025-05-31 15:59:24.591140 | orchestrator | ok: [testbed-node-4] 2025-05-31 15:59:24.591657 | orchestrator | 2025-05-31 15:59:24.592198 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-31 15:59:24.592526 | orchestrator | Saturday 31 May 2025 15:59:24 +0000 (0:00:01.780) 0:01:01.509 ********** 2025-05-31 15:59:35.650573 | orchestrator | ok: [testbed-node-0] 
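The osism.commons.packages block in this part of the run follows a common Debian maintenance sequence: switch needrestart to non-interactive mode, refresh the cache, pre-download upgrade and required packages (the "Download upgrade packages" and "Download required packages" tasks), then upgrade, install the required package list, and clean up. A condensed, illustrative sketch of that sequence follows; the package list and the needrestart handling are assumptions, and the separate download-only steps of the role are omitted.

---
# Condensed sketch of the package maintenance sequence; not the literal
# osism.commons.packages implementation.
- hosts: all
  become: true
  vars:
    required_packages:                                 # illustrative subset
      - jq
      - tmux
  tasks:
    - name: Set needrestart mode to automatic (assumption about the mechanism)
      ansible.builtin.lineinfile:
        path: /etc/needrestart/needrestart.conf
        regexp: '^#?\$nrconf\{restart\}'
        line: "$nrconf{restart} = 'a';"

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

    - name: Install required packages
      ansible.builtin.apt:
        name: "{{ required_packages }}"
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true

The "Install required packages" step is where most of the wall-clock time of this play goes (about 1 minute 24 seconds in this run), since it pulls in the bulk of the bootstrap dependencies on the freshly provisioned nodes.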
2025-05-31 15:59:35.650690 | orchestrator | ok: [testbed-node-5] 2025-05-31 15:59:35.650705 | orchestrator | ok: [testbed-node-2] 2025-05-31 15:59:35.650715 | orchestrator | ok: [testbed-node-3] 2025-05-31 15:59:35.650786 | orchestrator | changed: [testbed-manager] 2025-05-31 15:59:35.650800 | orchestrator | changed: [testbed-node-4] 2025-05-31 15:59:35.652071 | orchestrator | changed: [testbed-node-1] 2025-05-31 15:59:35.653033 | orchestrator | 2025-05-31 15:59:35.653498 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-31 15:59:35.654315 | orchestrator | Saturday 31 May 2025 15:59:35 +0000 (0:00:11.060) 0:01:12.570 ********** 2025-05-31 16:00:12.516549 | orchestrator | ok: [testbed-manager] 2025-05-31 16:00:12.516711 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:00:12.516728 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:00:12.516740 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:00:12.516752 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:00:12.516835 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:00:12.517989 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:00:12.518480 | orchestrator | 2025-05-31 16:00:12.518809 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-31 16:00:12.519121 | orchestrator | Saturday 31 May 2025 16:00:12 +0000 (0:00:36.865) 0:01:49.436 ********** 2025-05-31 16:01:36.397342 | orchestrator | changed: [testbed-manager] 2025-05-31 16:01:36.397474 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:01:36.397489 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:01:36.397500 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:01:36.399357 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:01:36.399384 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:01:36.399396 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:01:36.399646 | orchestrator | 2025-05-31 16:01:36.400268 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-31 16:01:36.400601 | orchestrator | Saturday 31 May 2025 16:01:36 +0000 (0:01:23.881) 0:03:13.317 ********** 2025-05-31 16:01:38.082635 | orchestrator | ok: [testbed-manager] 2025-05-31 16:01:38.082831 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:01:38.083562 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:01:38.084273 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:01:38.084586 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:01:38.085548 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:01:38.086102 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:01:38.086388 | orchestrator | 2025-05-31 16:01:38.086907 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-31 16:01:38.087691 | orchestrator | Saturday 31 May 2025 16:01:38 +0000 (0:00:01.685) 0:03:15.003 ********** 2025-05-31 16:01:48.860463 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:01:48.860640 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:01:48.862186 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:01:48.862936 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:01:48.865679 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:01:48.865899 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:01:48.866324 | orchestrator | changed: [testbed-manager] 2025-05-31 16:01:48.866851 | orchestrator | 2025-05-31 16:01:48.867404 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-05-31 16:01:48.868888 | orchestrator | Saturday 31 May 2025 16:01:48 +0000 (0:00:10.774) 0:03:25.777 ********** 2025-05-31 16:01:49.198282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-31 16:01:49.198844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-31 16:01:49.200080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-31 16:01:49.202856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-31 16:01:49.202913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-31 16:01:49.202927 | orchestrator | 2025-05-31 16:01:49.204022 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-31 16:01:49.204408 | orchestrator | Saturday 31 May 2025 16:01:49 +0000 (0:00:00.344) 0:03:26.122 ********** 2025-05-31 16:01:49.255407 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31 16:01:49.280093 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:01:49.283601 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31 16:01:49.304894 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:01:49.305011 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31 16:01:49.335279 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:01:49.335340 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31 16:01:49.363384 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:01:49.893073 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 16:01:49.893234 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 16:01:49.893743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 16:01:49.894678 | orchestrator | 2025-05-31 16:01:49.895297 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-31 16:01:49.895947 | orchestrator | Saturday 31 May 2025 16:01:49 +0000 (0:00:00.693) 0:03:26.815 ********** 2025-05-31 16:01:49.976364 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 16:01:49.976646 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 16:01:49.979361 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 16:01:49.979737 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 16:01:49.979999 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 16:01:49.980669 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 16:01:49.983140 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 16:01:49.983181 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 16:01:49.983317 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 16:01:49.983339 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 16:01:49.983712 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 16:01:49.984035 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 16:01:49.984301 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 16:01:49.984727 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 16:01:49.985104 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 16:01:49.987988 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 16:01:49.990187 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 16:01:50.004156 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:01:50.004315 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 16:01:50.004789 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 16:01:50.005043 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 16:01:50.035320 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:01:50.036302 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 
16:01:50.036325 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 16:01:50.036413 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 16:01:50.037015 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 16:01:50.040136 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 16:01:50.040360 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 16:01:50.040655 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 16:01:50.041198 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 16:01:50.041560 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 16:01:50.042010 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 16:01:50.069504 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 16:01:50.069631 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 16:01:50.073783 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 16:01:50.074277 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 16:01:50.075475 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 16:01:50.075498 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 16:01:50.075510 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 16:01:50.075522 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 16:01:50.096800 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 16:01:50.096839 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:01:50.096853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 16:01:53.752815 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:01:53.752977 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-31 16:01:53.753760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-31 16:01:53.753973 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-31 16:01:53.755354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-31 16:01:53.755869 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-31 16:01:53.756430 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-31 16:01:53.756990 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-31 16:01:53.757720 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-31 16:01:53.758226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-31 16:01:53.758916 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-31 16:01:53.759279 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-31 16:01:53.759737 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-31 16:01:53.760368 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-31 16:01:53.760885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-31 16:01:53.761687 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-31 16:01:53.762097 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-31 16:01:53.762423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-31 16:01:53.763008 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-31 16:01:53.763203 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-31 16:01:53.763678 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-31 16:01:53.764089 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-31 16:01:53.765411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-31 16:01:53.765646 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-31 16:01:53.766108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-31 16:01:53.766329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-31 16:01:53.766695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-31 16:01:53.766997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-31 16:01:53.767206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-31 16:01:53.767472 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-31 16:01:53.768160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-31 16:01:53.768186 | orchestrator | 2025-05-31 16:01:53.768485 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-31 16:01:53.768646 | orchestrator | Saturday 31 May 2025 16:01:53 +0000 (0:00:03.857) 0:03:30.673 ********** 2025-05-31 16:01:54.351790 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 16:01:54.352006 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
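The rabbitmq-group values applied above (TCP keepalive, socket buffer, and backlog tuning on the control nodes) map onto plain sysctl settings. The following is a minimal sketch assuming the ansible.posix collection is available; it is not the osism.commons.sysctl role itself, and the group name and the parameter list are taken from the log purely for illustration.

# tasks file sketch, not the actual role content
- name: Set sysctl parameters on rabbitmq hosts (sketch)
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true    # also apply to the running kernel, not only the sysctl file
    reload: true
  loop:
    - { name: net.ipv4.tcp_keepalive_time, value: 6 }
    - { name: net.core.wmem_max, value: 16777216 }
    - { name: net.core.rmem_max, value: 16777216 }
    - { name: net.core.somaxconn, value: 4096 }
    - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
  when: "'rabbitmq' in group_names"   # group name assumed from the task title

The skipping/changed pattern in the output is consistent with such a group condition: the parameters are only written on testbed-node-0/1/2, while the manager and testbed-node-3/4/5 skip every item.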
2025-05-31 16:01:54.352160 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 16:01:54.353048 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 16:01:54.353461 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 16:01:54.354803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 16:01:54.354935 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 16:01:54.356401 | orchestrator | 2025-05-31 16:01:54.356661 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-31 16:01:54.357218 | orchestrator | Saturday 31 May 2025 16:01:54 +0000 (0:00:00.601) 0:03:31.275 ********** 2025-05-31 16:01:54.404274 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 16:01:54.428074 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:01:54.509076 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 16:01:54.948492 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:01:54.951097 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 16:01:54.951143 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:01:54.951434 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 16:01:54.952323 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:01:54.952956 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-31 16:01:54.954258 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-31 16:01:54.954720 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-31 16:01:54.955978 | orchestrator | 2025-05-31 16:01:54.956442 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-31 16:01:54.956797 | orchestrator | Saturday 31 May 2025 16:01:54 +0000 (0:00:00.595) 0:03:31.871 ********** 2025-05-31 16:01:55.001647 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 16:01:55.021811 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:01:55.103505 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 16:01:55.103716 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 16:01:55.475930 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:01:55.477468 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:01:55.478639 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 16:01:55.479741 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:01:55.481625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-31 16:01:55.481681 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-05-31 16:01:55.482463 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-31 16:01:55.483346 | orchestrator | 2025-05-31 16:01:55.483936 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-31 16:01:55.484736 | orchestrator | Saturday 31 May 2025 16:01:55 +0000 (0:00:00.528) 0:03:32.399 ********** 2025-05-31 16:01:55.533200 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:01:55.554474 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:01:55.578232 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:01:55.600952 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:01:55.621677 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:01:55.731720 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:01:55.732381 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:01:55.733573 | orchestrator | 2025-05-31 16:01:55.734636 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-31 16:01:55.735831 | orchestrator | Saturday 31 May 2025 16:01:55 +0000 (0:00:00.254) 0:03:32.654 ********** 2025-05-31 16:02:01.614941 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:01.615218 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:01.615906 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:01.616923 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:01.617690 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:01.618572 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:01.619684 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:01.620744 | orchestrator | 2025-05-31 16:02:01.621668 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-31 16:02:01.622339 | orchestrator | Saturday 31 May 2025 16:02:01 +0000 (0:00:05.884) 0:03:38.539 ********** 2025-05-31 16:02:01.679067 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-31 16:02:01.717743 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:02:01.718288 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-31 16:02:01.720174 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-31 16:02:01.747329 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:02:01.790412 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:02:01.791271 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-31 16:02:01.792720 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-31 16:02:01.825931 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:02:01.826966 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-31 16:02:01.882851 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:02:01.883661 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:02:01.884672 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-31 16:02:01.885696 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:02:01.886159 | orchestrator | 2025-05-31 16:02:01.886876 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-31 16:02:01.887762 | orchestrator | Saturday 31 May 2025 16:02:01 +0000 (0:00:00.268) 0:03:38.807 ********** 2025-05-31 16:02:02.861294 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-31 16:02:02.861944 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-31 16:02:02.863072 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-05-31 16:02:02.864249 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-31 16:02:02.864681 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-31 16:02:02.865613 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-31 16:02:02.866427 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-31 16:02:02.867375 | orchestrator | 2025-05-31 16:02:02.868045 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-31 16:02:02.868743 | orchestrator | Saturday 31 May 2025 16:02:02 +0000 (0:00:00.976) 0:03:39.783 ********** 2025-05-31 16:02:03.240749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:02:03.241699 | orchestrator | 2025-05-31 16:02:03.245135 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-31 16:02:03.245213 | orchestrator | Saturday 31 May 2025 16:02:03 +0000 (0:00:00.380) 0:03:40.164 ********** 2025-05-31 16:02:04.537926 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:04.538083 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:04.538951 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:04.539596 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:04.540675 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:04.541870 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:04.542851 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:04.543719 | orchestrator | 2025-05-31 16:02:04.544399 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-31 16:02:04.544791 | orchestrator | Saturday 31 May 2025 16:02:04 +0000 (0:00:01.294) 0:03:41.459 ********** 2025-05-31 16:02:05.151221 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:05.151322 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:05.152943 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:05.153630 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:05.154003 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:05.154982 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:05.155301 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:05.157741 | orchestrator | 2025-05-31 16:02:05.157778 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-31 16:02:05.159459 | orchestrator | Saturday 31 May 2025 16:02:05 +0000 (0:00:00.612) 0:03:42.072 ********** 2025-05-31 16:02:05.758189 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:05.758569 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:05.758603 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:05.759412 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:05.760194 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:05.760999 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:05.761598 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:05.762176 | orchestrator | 2025-05-31 16:02:05.762752 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-31 16:02:05.763299 | orchestrator | Saturday 31 May 2025 16:02:05 +0000 (0:00:00.608) 0:03:42.680 ********** 2025-05-31 16:02:06.370734 | orchestrator | ok: [testbed-manager] 2025-05-31 
16:02:06.370893 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:06.373033 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:06.374116 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:06.375244 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:06.376155 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:06.377103 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:06.377312 | orchestrator | 2025-05-31 16:02:06.378280 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-31 16:02:06.378759 | orchestrator | Saturday 31 May 2025 16:02:06 +0000 (0:00:00.614) 0:03:43.294 ********** 2025-05-31 16:02:07.291318 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705410.1078033, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.291482 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705454.7841797, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.291905 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705470.851084, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.292255 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705469.4077265, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.292690 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705465.2619033, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.293197 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705446.4943974, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.293845 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748705453.2113948, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.294365 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705438.1849866, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.294646 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705370.6238515, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.295159 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705368.7689786, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.295706 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705365.7054138, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.296827 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705360.7441027, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.296944 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705366.5234385, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.297343 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748705361.3261814, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:02:07.297655 | orchestrator | 2025-05-31 16:02:07.298060 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-31 16:02:07.298344 | orchestrator | Saturday 31 May 2025 16:02:07 +0000 (0:00:00.920) 0:03:44.215 ********** 2025-05-31 16:02:08.407577 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:08.409670 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:08.409705 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:08.409984 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:08.411031 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:08.411401 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:08.412018 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:08.412573 | orchestrator | 2025-05-31 16:02:08.413235 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-31 16:02:08.413700 | orchestrator | Saturday 31 May 2025 16:02:08 +0000 (0:00:01.114) 0:03:45.330 ********** 2025-05-31 16:02:09.483577 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:09.483718 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:09.483734 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:09.483746 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:09.483757 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:09.483831 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:09.484306 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:09.484668 | orchestrator | 2025-05-31 16:02:09.485120 | orchestrator | TASK [osism.commons.motd : Configure SSH to print 
the motd] ******************** 2025-05-31 16:02:09.485430 | orchestrator | Saturday 31 May 2025 16:02:09 +0000 (0:00:01.073) 0:03:46.403 ********** 2025-05-31 16:02:09.574154 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:02:09.611437 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:02:09.642187 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:02:09.670813 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:02:09.730638 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:02:09.731274 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:02:09.732776 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:02:09.734485 | orchestrator | 2025-05-31 16:02:09.735272 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-31 16:02:09.736576 | orchestrator | Saturday 31 May 2025 16:02:09 +0000 (0:00:00.250) 0:03:46.653 ********** 2025-05-31 16:02:10.421423 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:10.421643 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:10.421795 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:10.422160 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:10.422410 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:10.422895 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:10.423314 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:10.423916 | orchestrator | 2025-05-31 16:02:10.424379 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-31 16:02:10.424983 | orchestrator | Saturday 31 May 2025 16:02:10 +0000 (0:00:00.690) 0:03:47.344 ********** 2025-05-31 16:02:10.786757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:02:10.787239 | orchestrator | 2025-05-31 16:02:10.788202 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-31 16:02:10.788725 | orchestrator | Saturday 31 May 2025 16:02:10 +0000 (0:00:00.366) 0:03:47.711 ********** 2025-05-31 16:02:18.522296 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:18.522466 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:18.523674 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:18.524050 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:18.525013 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:18.525778 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:18.526782 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:18.527501 | orchestrator | 2025-05-31 16:02:18.528136 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-31 16:02:18.528672 | orchestrator | Saturday 31 May 2025 16:02:18 +0000 (0:00:07.730) 0:03:55.441 ********** 2025-05-31 16:02:19.679898 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:19.680064 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:19.680422 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:19.681396 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:19.682118 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:19.682840 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:19.683907 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:19.684423 | orchestrator | 2025-05-31 16:02:19.685204 | orchestrator | 
TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-31 16:02:19.685630 | orchestrator | Saturday 31 May 2025 16:02:19 +0000 (0:00:01.160) 0:03:56.602 ********** 2025-05-31 16:02:20.630814 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:20.631593 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:20.632339 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:20.633397 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:20.634209 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:20.635302 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:20.635785 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:20.636473 | orchestrator | 2025-05-31 16:02:20.636957 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-31 16:02:20.637519 | orchestrator | Saturday 31 May 2025 16:02:20 +0000 (0:00:00.950) 0:03:57.552 ********** 2025-05-31 16:02:20.976146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:02:20.976762 | orchestrator | 2025-05-31 16:02:20.977649 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-31 16:02:20.978762 | orchestrator | Saturday 31 May 2025 16:02:20 +0000 (0:00:00.346) 0:03:57.899 ********** 2025-05-31 16:02:29.232013 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:29.232188 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:29.233848 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:29.235238 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:29.236067 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:29.237169 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:29.237917 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:29.238825 | orchestrator | 2025-05-31 16:02:29.239167 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-31 16:02:29.239951 | orchestrator | Saturday 31 May 2025 16:02:29 +0000 (0:00:08.255) 0:04:06.155 ********** 2025-05-31 16:02:29.929344 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:29.929549 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:29.930495 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:29.931649 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:29.932385 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:29.933341 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:29.933774 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:29.934682 | orchestrator | 2025-05-31 16:02:29.935305 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-31 16:02:29.935912 | orchestrator | Saturday 31 May 2025 16:02:29 +0000 (0:00:00.695) 0:04:06.850 ********** 2025-05-31 16:02:31.008968 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:31.010181 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:31.010505 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:31.012324 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:31.012349 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:31.013009 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:31.013626 | orchestrator | changed: [testbed-node-1] 
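The smartd steps here (install smartmontools, create /var/log/smartd, copy a configuration file, then manage the service) can be approximated with stock modules. A minimal sketch assuming Debian-family hosts; the directory mode and the absence of a configuration template are simplifications, not what osism.services.smartd actually ships.

# condensed sketch of the smartd sequence
- name: Install the smartmontools package (sketch)
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create the log directory referenced by the smartd configuration
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    mode: "0755"          # assumed mode, not taken from the role

- name: Enable and start smartd
  ansible.builtin.service:
    name: smartd
    state: started
    enabled: true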
2025-05-31 16:02:31.014263 | orchestrator | 2025-05-31 16:02:31.014902 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-31 16:02:31.015682 | orchestrator | Saturday 31 May 2025 16:02:31 +0000 (0:00:01.081) 0:04:07.931 ********** 2025-05-31 16:02:32.038847 | orchestrator | changed: [testbed-manager] 2025-05-31 16:02:32.043028 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:02:32.043978 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:02:32.044801 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:02:32.045456 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:02:32.046180 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:02:32.046766 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:02:32.047054 | orchestrator | 2025-05-31 16:02:32.047928 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-31 16:02:32.048433 | orchestrator | Saturday 31 May 2025 16:02:32 +0000 (0:00:01.025) 0:04:08.957 ********** 2025-05-31 16:02:32.135914 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:32.165757 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:32.200489 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:32.229942 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:32.288984 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:32.289078 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:32.289300 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:32.289745 | orchestrator | 2025-05-31 16:02:32.292019 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-31 16:02:32.293304 | orchestrator | Saturday 31 May 2025 16:02:32 +0000 (0:00:00.255) 0:04:09.212 ********** 2025-05-31 16:02:32.372405 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:32.403633 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:32.437787 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:32.471088 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:32.500100 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:32.578721 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:32.578882 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:32.580215 | orchestrator | 2025-05-31 16:02:32.581014 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-31 16:02:32.582144 | orchestrator | Saturday 31 May 2025 16:02:32 +0000 (0:00:00.289) 0:04:09.501 ********** 2025-05-31 16:02:32.678646 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:32.714661 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:32.744076 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:32.781635 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:32.858508 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:32.860236 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:32.860258 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:02:32.861001 | orchestrator | 2025-05-31 16:02:32.861020 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-31 16:02:32.861173 | orchestrator | Saturday 31 May 2025 16:02:32 +0000 (0:00:00.281) 0:04:09.783 ********** 2025-05-31 16:02:38.794063 | orchestrator | ok: [testbed-manager] 2025-05-31 16:02:38.794689 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:02:38.795710 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:02:38.796291 | orchestrator 
| ok: [testbed-node-2] 2025-05-31 16:02:38.797359 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:02:38.798213 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:02:38.801132 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:02:38.801152 | orchestrator | 2025-05-31 16:02:38.801583 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-31 16:02:38.802003 | orchestrator | Saturday 31 May 2025 16:02:38 +0000 (0:00:05.934) 0:04:15.717 ********** 2025-05-31 16:02:39.146674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:02:39.146849 | orchestrator | 2025-05-31 16:02:39.153167 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-31 16:02:39.153191 | orchestrator | Saturday 31 May 2025 16:02:39 +0000 (0:00:00.352) 0:04:16.069 ********** 2025-05-31 16:02:39.218514 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.219101 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-31 16:02:39.220013 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.257768 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-31 16:02:39.258190 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:02:39.259732 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.260160 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-31 16:02:39.307100 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:02:39.308375 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.309234 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-31 16:02:39.345629 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:02:39.405130 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:02:39.408757 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.409140 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-31 16:02:39.411686 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.485392 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:02:39.487459 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-31 16:02:39.491341 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:02:39.491406 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-31 16:02:39.491422 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-31 16:02:39.491434 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:02:39.491749 | orchestrator | 2025-05-31 16:02:39.492731 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-31 16:02:39.493775 | orchestrator | Saturday 31 May 2025 16:02:39 +0000 (0:00:00.340) 0:04:16.410 ********** 2025-05-31 16:02:39.853905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:02:39.854004 | orchestrator | 2025-05-31 16:02:39.856847 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-31 16:02:39.857225 | orchestrator | Saturday 31 May 2025 16:02:39 +0000 (0:00:00.366) 0:04:16.776 ********** 2025-05-31 16:02:39.929236 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-31 16:02:39.964854 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-31 16:02:39.965652 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:02:39.966096 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-31 16:02:39.995291 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:02:40.031376 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-31 16:02:40.031426 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:02:40.078195 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-31 16:02:40.078926 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:02:40.079269 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-31 16:02:40.144331 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:02:40.144417 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:02:40.145027 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-31 16:02:40.147179 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:02:40.147217 | orchestrator | 2025-05-31 16:02:40.148127 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-31 16:02:40.148975 | orchestrator | Saturday 31 May 2025 16:02:40 +0000 (0:00:00.291) 0:04:17.068 ********** 2025-05-31 16:02:40.494592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:02:40.494711 | orchestrator | 2025-05-31 16:02:40.494788 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-31 16:02:40.496150 | orchestrator | Saturday 31 May 2025 16:02:40 +0000 (0:00:00.349) 0:04:17.417 ********** 2025-05-31 16:03:15.227459 | orchestrator | changed: [testbed-manager] 2025-05-31 16:03:15.227573 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:03:15.227685 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:03:15.227814 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:03:15.231070 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:03:15.232110 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:03:15.232418 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:03:15.233208 | orchestrator | 2025-05-31 16:03:15.233683 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-31 16:03:15.234083 | orchestrator | Saturday 31 May 2025 16:03:15 +0000 (0:00:34.731) 0:04:52.149 ********** 2025-05-31 16:03:23.324402 | orchestrator | changed: [testbed-manager] 2025-05-31 16:03:23.325090 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:03:23.327985 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:03:23.329324 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:03:23.330099 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:03:23.330563 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:03:23.331380 | orchestrator | changed: [testbed-node-1] 
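The cleanup role at this point disables the apt-daily timers and the ModemManager service when they are present (both were skipped on every host in this run), removes a list of unwanted packages, and then strips cloud-init. A minimal sketch of that pattern, assuming systemd timers and apt; the purge flag and the exact package list are assumptions rather than the role's defaults.

# sketch of the timer/package cleanup pattern
- name: Disable apt-daily timers when they exist (sketch)
  ansible.builtin.systemd:
    name: "{{ item }}.timer"
    state: stopped
    enabled: false
  loop:
    - apt-daily
    - apt-daily-upgrade

- name: Remove the cloud-init package
  ansible.builtin.apt:
    name: cloud-init
    state: absent
    purge: true           # purge is an assumption, not necessarily what the role does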
2025-05-31 16:03:23.332324 | orchestrator | 2025-05-31 16:03:23.332772 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-31 16:03:23.333505 | orchestrator | Saturday 31 May 2025 16:03:23 +0000 (0:00:08.095) 0:05:00.244 ********** 2025-05-31 16:03:30.799917 | orchestrator | changed: [testbed-manager] 2025-05-31 16:03:30.800031 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:03:30.800492 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:03:30.801970 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:03:30.803386 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:03:30.804217 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:03:30.805546 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:03:30.807018 | orchestrator | 2025-05-31 16:03:30.807553 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-31 16:03:30.809123 | orchestrator | Saturday 31 May 2025 16:03:30 +0000 (0:00:07.477) 0:05:07.722 ********** 2025-05-31 16:03:32.465515 | orchestrator | ok: [testbed-manager] 2025-05-31 16:03:32.466209 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:03:32.466832 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:03:32.467814 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:03:32.469206 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:03:32.469230 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:03:32.469643 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:03:32.470216 | orchestrator | 2025-05-31 16:03:32.471138 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-31 16:03:32.471205 | orchestrator | Saturday 31 May 2025 16:03:32 +0000 (0:00:01.665) 0:05:09.387 ********** 2025-05-31 16:03:38.196077 | orchestrator | changed: [testbed-manager] 2025-05-31 16:03:38.197068 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:03:38.197367 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:03:38.197839 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:03:38.198346 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:03:38.198770 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:03:38.200983 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:03:38.202345 | orchestrator | 2025-05-31 16:03:38.203042 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-31 16:03:38.203499 | orchestrator | Saturday 31 May 2025 16:03:38 +0000 (0:00:05.731) 0:05:15.118 ********** 2025-05-31 16:03:38.567779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:03:38.568237 | orchestrator | 2025-05-31 16:03:38.568793 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-31 16:03:38.572108 | orchestrator | Saturday 31 May 2025 16:03:38 +0000 (0:00:00.371) 0:05:15.490 ********** 2025-05-31 16:03:39.259264 | orchestrator | changed: [testbed-manager] 2025-05-31 16:03:39.266679 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:03:39.266769 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:03:39.266784 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:03:39.266827 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:03:39.267286 | orchestrator | changed: 
[testbed-node-2] 2025-05-31 16:03:39.268756 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:03:39.270886 | orchestrator | 2025-05-31 16:03:39.271503 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-31 16:03:39.272103 | orchestrator | Saturday 31 May 2025 16:03:39 +0000 (0:00:00.690) 0:05:16.181 ********** 2025-05-31 16:03:40.921556 | orchestrator | ok: [testbed-manager] 2025-05-31 16:03:40.922236 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:03:40.922586 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:03:40.923938 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:03:40.924605 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:03:40.925522 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:03:40.926219 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:03:40.927048 | orchestrator | 2025-05-31 16:03:40.927770 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-31 16:03:40.928199 | orchestrator | Saturday 31 May 2025 16:03:40 +0000 (0:00:01.662) 0:05:17.843 ********** 2025-05-31 16:03:41.656137 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:03:41.656319 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:03:41.657158 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:03:41.657917 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:03:41.658686 | orchestrator | changed: [testbed-manager] 2025-05-31 16:03:41.659658 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:03:41.660464 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:03:41.661369 | orchestrator | 2025-05-31 16:03:41.662410 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-31 16:03:41.662955 | orchestrator | Saturday 31 May 2025 16:03:41 +0000 (0:00:00.735) 0:05:18.578 ********** 2025-05-31 16:03:41.763094 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:03:41.800247 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:03:41.834971 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:03:41.866772 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:03:41.931969 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:03:41.932847 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:03:41.933681 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:03:41.936995 | orchestrator | 2025-05-31 16:03:41.937023 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-31 16:03:41.937036 | orchestrator | Saturday 31 May 2025 16:03:41 +0000 (0:00:00.276) 0:05:18.855 ********** 2025-05-31 16:03:41.991984 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:03:42.020235 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:03:42.059867 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:03:42.099314 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:03:42.129584 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:03:42.305768 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:03:42.305987 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:03:42.309560 | orchestrator | 2025-05-31 16:03:42.310837 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-31 16:03:42.311434 | orchestrator | Saturday 31 May 2025 16:03:42 +0000 (0:00:00.372) 0:05:19.227 ********** 2025-05-31 16:03:42.407251 | orchestrator | ok: [testbed-manager] 2025-05-31 16:03:42.438545 | 
orchestrator | ok: [testbed-node-3] 2025-05-31 16:03:42.481004 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:03:42.531401 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:03:42.600803 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:03:42.603075 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:03:42.603115 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:03:42.603137 | orchestrator | 2025-05-31 16:03:42.603160 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-31 16:03:42.603275 | orchestrator | Saturday 31 May 2025 16:03:42 +0000 (0:00:00.294) 0:05:19.522 ********** 2025-05-31 16:03:42.700569 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:03:42.732358 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:03:42.765152 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:03:42.794884 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:03:42.846341 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:03:42.846837 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:03:42.847277 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:03:42.847887 | orchestrator | 2025-05-31 16:03:42.848881 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-31 16:03:42.848919 | orchestrator | Saturday 31 May 2025 16:03:42 +0000 (0:00:00.248) 0:05:19.771 ********** 2025-05-31 16:03:42.937033 | orchestrator | ok: [testbed-manager] 2025-05-31 16:03:42.983706 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:03:43.016563 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:03:43.057976 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:03:43.127511 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:03:43.127599 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:03:43.128226 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:03:43.129628 | orchestrator | 2025-05-31 16:03:43.130347 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-31 16:03:43.130799 | orchestrator | Saturday 31 May 2025 16:03:43 +0000 (0:00:00.279) 0:05:20.051 ********** 2025-05-31 16:03:43.222926 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:03:43.251206 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:03:43.283928 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:03:43.310350 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:03:43.371996 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:03:43.372493 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:03:43.373741 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:03:43.374333 | orchestrator | 2025-05-31 16:03:43.375289 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-31 16:03:43.376547 | orchestrator | Saturday 31 May 2025 16:03:43 +0000 (0:00:00.244) 0:05:20.295 ********** 2025-05-31 16:03:43.478398 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:03:43.506779 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:03:43.540190 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:03:43.571396 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:03:43.622862 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:03:43.623045 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:03:43.623803 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:03:43.625198 | orchestrator | 2025-05-31 16:03:43.625592 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-05-31 16:03:43.626290 | orchestrator | Saturday 31 May 2025 16:03:43 +0000 (0:00:00.251) 0:05:20.546 ********** 2025-05-31 16:03:44.083138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:03:44.083292 | orchestrator | 2025-05-31 16:03:44.083404 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-31 16:03:44.084141 | orchestrator | Saturday 31 May 2025 16:03:44 +0000 (0:00:00.459) 0:05:21.006 ********** 2025-05-31 16:03:44.914141 | orchestrator | ok: [testbed-manager] 2025-05-31 16:03:44.914249 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:03:44.916788 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:03:44.917196 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:03:44.919081 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:03:44.921899 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:03:44.921938 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:03:44.921950 | orchestrator | 2025-05-31 16:03:44.921964 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-31 16:03:44.921977 | orchestrator | Saturday 31 May 2025 16:03:44 +0000 (0:00:00.826) 0:05:21.832 ********** 2025-05-31 16:03:47.585457 | orchestrator | ok: [testbed-manager] 2025-05-31 16:03:47.585729 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:03:47.586445 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:03:47.587348 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:03:47.588218 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:03:47.588839 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:03:47.589664 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:03:47.590419 | orchestrator | 2025-05-31 16:03:47.591038 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-31 16:03:47.591759 | orchestrator | Saturday 31 May 2025 16:03:47 +0000 (0:00:02.676) 0:05:24.508 ********** 2025-05-31 16:03:47.655910 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-31 16:03:47.656128 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-31 16:03:47.724687 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-31 16:03:47.725140 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-31 16:03:47.725641 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-31 16:03:47.727431 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-31 16:03:47.791514 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:03:47.791606 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-31 16:03:47.792518 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-31 16:03:47.793039 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-31 16:03:47.862893 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:03:47.862948 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-31 16:03:47.863044 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-31 16:03:47.863114 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-31 
16:03:47.925854 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:03:47.925991 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-31 16:03:47.926899 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-31 16:03:47.930262 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-31 16:03:47.988921 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:03:47.989242 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-31 16:03:47.990111 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-31 16:03:48.122222 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:03:48.122554 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-31 16:03:48.123362 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:03:48.127063 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-31 16:03:48.127088 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-31 16:03:48.127099 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-31 16:03:48.127111 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:03:48.127123 | orchestrator | 2025-05-31 16:03:48.127136 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-31 16:03:48.127618 | orchestrator | Saturday 31 May 2025 16:03:48 +0000 (0:00:00.537) 0:05:25.046 ********** 2025-05-31 16:04:00.065318 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:00.068289 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:00.072850 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:00.072877 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:00.074084 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:00.074757 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:00.075408 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:00.075840 | orchestrator | 2025-05-31 16:04:00.076405 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-31 16:04:00.076787 | orchestrator | Saturday 31 May 2025 16:04:00 +0000 (0:00:11.937) 0:05:36.984 ********** 2025-05-31 16:04:01.192793 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:01.192973 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:01.193893 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:01.195876 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:01.195901 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:01.195944 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:01.196557 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:01.197280 | orchestrator | 2025-05-31 16:04:01.197875 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-31 16:04:01.198516 | orchestrator | Saturday 31 May 2025 16:04:01 +0000 (0:00:01.129) 0:05:38.114 ********** 2025-05-31 16:04:08.869803 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:08.870826 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:08.872063 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:08.873882 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:08.874839 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:08.875809 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:08.876915 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:08.877479 | orchestrator | 
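At this point the run has added the upstream Docker apt repository on every testbed host; the next tasks refresh the apt cache and pin the docker and docker-cli package versions before containerd and docker are installed. As a hedged illustration only -- the repository URL, release handling, package name, and pin value below are assumptions, not taken from this log or from the osism.services.docker role -- the same pattern can be expressed as a minimal Ansible play:

- hosts: all
  become: true
  tasks:
    # Assumed repository URL and suite; the role's actual values are not visible in this log.
    - name: Add Docker apt repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true

    # Placeholder pin pattern; the version actually pinned by this run is not shown here.
    - name: Pin docker-ce via apt preferences
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        content: |
          Package: docker-ce
          Pin: version 5:*
          Pin-Priority: 1000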
2025-05-31 16:04:08.878335 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-31 16:04:08.878945 | orchestrator | Saturday 31 May 2025 16:04:08 +0000 (0:00:07.677) 0:05:45.792 ********** 2025-05-31 16:04:12.123272 | orchestrator | changed: [testbed-manager] 2025-05-31 16:04:12.123835 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:12.124902 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:12.127083 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:12.127990 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:12.128902 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:12.129575 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:12.130655 | orchestrator | 2025-05-31 16:04:12.131055 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-31 16:04:12.131806 | orchestrator | Saturday 31 May 2025 16:04:12 +0000 (0:00:03.253) 0:05:49.045 ********** 2025-05-31 16:04:13.415414 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:13.415661 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:13.417192 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:13.417717 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:13.418249 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:13.419912 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:13.419935 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:13.420893 | orchestrator | 2025-05-31 16:04:13.420917 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-31 16:04:13.421103 | orchestrator | Saturday 31 May 2025 16:04:13 +0000 (0:00:01.291) 0:05:50.336 ********** 2025-05-31 16:04:14.876446 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:14.876647 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:14.878240 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:14.878730 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:14.879374 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:14.880462 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:14.881115 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:14.881910 | orchestrator | 2025-05-31 16:04:14.882958 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-31 16:04:14.883008 | orchestrator | Saturday 31 May 2025 16:04:14 +0000 (0:00:01.461) 0:05:51.798 ********** 2025-05-31 16:04:15.097434 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:15.163839 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:15.226141 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:15.291352 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:15.451388 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:15.451777 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:15.453163 | orchestrator | changed: [testbed-manager] 2025-05-31 16:04:15.453863 | orchestrator | 2025-05-31 16:04:15.456176 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-31 16:04:15.456292 | orchestrator | Saturday 31 May 2025 16:04:15 +0000 (0:00:00.575) 0:05:52.373 ********** 2025-05-31 16:04:25.086571 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:25.086808 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:25.087591 | orchestrator | changed: [testbed-node-0] 
2025-05-31 16:04:25.088807 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:25.089951 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:25.090836 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:25.091108 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:25.093492 | orchestrator | 2025-05-31 16:04:25.093648 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-31 16:04:25.093931 | orchestrator | Saturday 31 May 2025 16:04:25 +0000 (0:00:09.634) 0:06:02.007 ********** 2025-05-31 16:04:25.978531 | orchestrator | changed: [testbed-manager] 2025-05-31 16:04:25.978634 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:25.980324 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:25.981438 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:25.982822 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:25.983309 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:25.984263 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:25.985105 | orchestrator | 2025-05-31 16:04:25.986308 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-31 16:04:25.986915 | orchestrator | Saturday 31 May 2025 16:04:25 +0000 (0:00:00.894) 0:06:02.901 ********** 2025-05-31 16:04:38.616960 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:38.617110 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:38.617241 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:38.621105 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:38.621157 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:38.621171 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:38.621183 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:38.621195 | orchestrator | 2025-05-31 16:04:38.622586 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-31 16:04:38.622687 | orchestrator | Saturday 31 May 2025 16:04:38 +0000 (0:00:12.635) 0:06:15.537 ********** 2025-05-31 16:04:51.428199 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:51.428321 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:51.428338 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:51.428351 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:51.428425 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:51.429067 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:51.429450 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:51.429953 | orchestrator | 2025-05-31 16:04:51.430563 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-31 16:04:51.431204 | orchestrator | Saturday 31 May 2025 16:04:51 +0000 (0:00:12.806) 0:06:28.344 ********** 2025-05-31 16:04:51.842636 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-31 16:04:52.593626 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-31 16:04:52.594134 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-31 16:04:52.596341 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-31 16:04:52.596378 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-31 16:04:52.597226 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-31 16:04:52.597959 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-31 16:04:52.598778 | 
orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-31 16:04:52.599413 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-31 16:04:52.600422 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-31 16:04:52.601038 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-31 16:04:52.601763 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-31 16:04:52.602394 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-31 16:04:52.602921 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-31 16:04:52.603529 | orchestrator | 2025-05-31 16:04:52.604201 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-31 16:04:52.605107 | orchestrator | Saturday 31 May 2025 16:04:52 +0000 (0:00:01.171) 0:06:29.516 ********** 2025-05-31 16:04:52.723488 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:04:52.791467 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:52.851866 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:52.914169 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:52.981939 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:53.100218 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:53.100972 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:53.101702 | orchestrator | 2025-05-31 16:04:53.102427 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-31 16:04:53.103045 | orchestrator | Saturday 31 May 2025 16:04:53 +0000 (0:00:00.506) 0:06:30.022 ********** 2025-05-31 16:04:56.774822 | orchestrator | ok: [testbed-manager] 2025-05-31 16:04:56.775588 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:04:56.777264 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:04:56.778256 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:04:56.780200 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:04:56.781094 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:04:56.782550 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:04:56.782878 | orchestrator | 2025-05-31 16:04:56.783710 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-31 16:04:56.784302 | orchestrator | Saturday 31 May 2025 16:04:56 +0000 (0:00:03.672) 0:06:33.695 ********** 2025-05-31 16:04:56.912409 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:04:56.971728 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:57.037230 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:57.245293 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:57.306938 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:57.400659 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:57.401901 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:57.405927 | orchestrator | 2025-05-31 16:04:57.405962 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-31 16:04:57.405976 | orchestrator | Saturday 31 May 2025 16:04:57 +0000 (0:00:00.627) 0:06:34.322 ********** 2025-05-31 16:04:57.473625 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-31 16:04:57.473836 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-31 16:04:57.541349 | orchestrator | skipping: [testbed-manager] 2025-05-31 
16:04:57.542159 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-31 16:04:57.545952 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-31 16:04:57.606826 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:57.607902 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-31 16:04:57.608535 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-31 16:04:57.676547 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:57.677504 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-31 16:04:57.679050 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-31 16:04:57.738406 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:57.739124 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-31 16:04:57.739616 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-31 16:04:57.805731 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:57.806996 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-31 16:04:57.809020 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-31 16:04:57.916699 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:57.918208 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-31 16:04:57.919210 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-31 16:04:57.920289 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:57.920834 | orchestrator | 2025-05-31 16:04:57.921286 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-31 16:04:57.921879 | orchestrator | Saturday 31 May 2025 16:04:57 +0000 (0:00:00.518) 0:06:34.841 ********** 2025-05-31 16:04:58.037131 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:04:58.102609 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:58.160559 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:58.222006 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:58.288254 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:58.382607 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:58.382823 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:58.383074 | orchestrator | 2025-05-31 16:04:58.383700 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-31 16:04:58.384079 | orchestrator | Saturday 31 May 2025 16:04:58 +0000 (0:00:00.464) 0:06:35.305 ********** 2025-05-31 16:04:58.508880 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:04:58.569246 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:58.629079 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:58.692579 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:58.752907 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:58.859646 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:58.859874 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:58.860219 | orchestrator | 2025-05-31 16:04:58.861010 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-31 16:04:58.861466 | orchestrator | Saturday 31 May 2025 16:04:58 +0000 (0:00:00.474) 0:06:35.780 ********** 2025-05-31 16:04:58.984656 | orchestrator | skipping: [testbed-manager] 2025-05-31 
16:04:59.043709 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:04:59.111734 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:04:59.171949 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:04:59.231818 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:04:59.345331 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:04:59.345694 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:04:59.346622 | orchestrator | 2025-05-31 16:04:59.347651 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-31 16:04:59.347951 | orchestrator | Saturday 31 May 2025 16:04:59 +0000 (0:00:00.489) 0:06:36.269 ********** 2025-05-31 16:05:05.996334 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:05.996451 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:05.997295 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:05.998123 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:05.999004 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:06.000828 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:06.001198 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:06.001972 | orchestrator | 2025-05-31 16:05:06.003052 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-31 16:05:06.003319 | orchestrator | Saturday 31 May 2025 16:05:05 +0000 (0:00:06.647) 0:06:42.917 ********** 2025-05-31 16:05:06.795847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:05:06.796559 | orchestrator | 2025-05-31 16:05:06.798241 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-31 16:05:06.799697 | orchestrator | Saturday 31 May 2025 16:05:06 +0000 (0:00:00.799) 0:06:43.717 ********** 2025-05-31 16:05:07.614642 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:07.614801 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:07.615604 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:07.616238 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:07.616928 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:07.617947 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:07.618837 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:07.619354 | orchestrator | 2025-05-31 16:05:07.619916 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-31 16:05:07.620455 | orchestrator | Saturday 31 May 2025 16:05:07 +0000 (0:00:00.818) 0:06:44.535 ********** 2025-05-31 16:05:08.080435 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:08.154865 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:08.609867 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:08.610426 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:08.611561 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:08.612411 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:08.615029 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:08.615055 | orchestrator | 2025-05-31 16:05:08.615069 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-31 16:05:08.615082 | orchestrator | Saturday 31 May 2025 16:05:08 +0000 (0:00:00.998) 
0:06:45.534 ********** 2025-05-31 16:05:10.070580 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:10.070687 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:10.070825 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:10.071285 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:10.072293 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:10.072429 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:10.073288 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:10.073988 | orchestrator | 2025-05-31 16:05:10.074551 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-31 16:05:10.075163 | orchestrator | Saturday 31 May 2025 16:05:10 +0000 (0:00:01.458) 0:06:46.992 ********** 2025-05-31 16:05:10.203945 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:11.426602 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:11.427235 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:11.428163 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:11.430384 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:11.430943 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:11.431667 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:11.432240 | orchestrator | 2025-05-31 16:05:11.432999 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-31 16:05:11.433652 | orchestrator | Saturday 31 May 2025 16:05:11 +0000 (0:00:01.355) 0:06:48.348 ********** 2025-05-31 16:05:12.753248 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:12.754098 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:12.754141 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:12.754184 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:12.754197 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:12.754208 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:12.754274 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:12.755217 | orchestrator | 2025-05-31 16:05:12.755441 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-31 16:05:12.756191 | orchestrator | Saturday 31 May 2025 16:05:12 +0000 (0:00:01.328) 0:06:49.676 ********** 2025-05-31 16:05:14.138627 | orchestrator | changed: [testbed-manager] 2025-05-31 16:05:14.139617 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:14.141522 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:14.142582 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:14.146123 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:14.146179 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:14.146198 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:14.146210 | orchestrator | 2025-05-31 16:05:14.146223 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-31 16:05:14.146506 | orchestrator | Saturday 31 May 2025 16:05:14 +0000 (0:00:01.382) 0:06:51.059 ********** 2025-05-31 16:05:15.130512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:05:15.131589 | orchestrator | 2025-05-31 16:05:15.132317 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-31 
16:05:15.133888 | orchestrator | Saturday 31 May 2025 16:05:15 +0000 (0:00:00.993) 0:06:52.053 ********** 2025-05-31 16:05:16.471909 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:16.475588 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:16.475645 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:16.475658 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:16.475669 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:16.475680 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:16.475691 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:16.476150 | orchestrator | 2025-05-31 16:05:16.477102 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-31 16:05:16.477825 | orchestrator | Saturday 31 May 2025 16:05:16 +0000 (0:00:01.339) 0:06:53.393 ********** 2025-05-31 16:05:17.588884 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:17.589730 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:17.589750 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:17.590481 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:17.593966 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:17.593979 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:17.593986 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:17.593993 | orchestrator | 2025-05-31 16:05:17.594732 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-31 16:05:17.595417 | orchestrator | Saturday 31 May 2025 16:05:17 +0000 (0:00:01.115) 0:06:54.508 ********** 2025-05-31 16:05:18.762700 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:18.763844 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:18.764197 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:18.765973 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:18.767288 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:18.767464 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:18.768403 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:18.769742 | orchestrator | 2025-05-31 16:05:18.770117 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-31 16:05:18.771300 | orchestrator | Saturday 31 May 2025 16:05:18 +0000 (0:00:01.177) 0:06:55.686 ********** 2025-05-31 16:05:20.033226 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:20.034258 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:20.034614 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:20.036277 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:20.036336 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:20.037643 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:20.037747 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:20.038002 | orchestrator | 2025-05-31 16:05:20.038814 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-31 16:05:20.039296 | orchestrator | Saturday 31 May 2025 16:05:20 +0000 (0:00:01.267) 0:06:56.953 ********** 2025-05-31 16:05:21.320623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:05:21.321723 | orchestrator | 2025-05-31 16:05:21.322522 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.323523 | orchestrator 
| Saturday 31 May 2025 16:05:20 +0000 (0:00:00.969) 0:06:57.923 ********** 2025-05-31 16:05:21.325365 | orchestrator | 2025-05-31 16:05:21.325940 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.327207 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.058) 0:06:57.981 ********** 2025-05-31 16:05:21.328386 | orchestrator | 2025-05-31 16:05:21.329131 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.330302 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.039) 0:06:58.020 ********** 2025-05-31 16:05:21.330691 | orchestrator | 2025-05-31 16:05:21.331161 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.332089 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.040) 0:06:58.060 ********** 2025-05-31 16:05:21.332567 | orchestrator | 2025-05-31 16:05:21.333929 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.334504 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.047) 0:06:58.107 ********** 2025-05-31 16:05:21.335505 | orchestrator | 2025-05-31 16:05:21.336007 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.336538 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.038) 0:06:58.146 ********** 2025-05-31 16:05:21.337125 | orchestrator | 2025-05-31 16:05:21.337511 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 16:05:21.337892 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.050) 0:06:58.196 ********** 2025-05-31 16:05:21.338842 | orchestrator | 2025-05-31 16:05:21.339613 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 16:05:21.340109 | orchestrator | Saturday 31 May 2025 16:05:21 +0000 (0:00:00.045) 0:06:58.242 ********** 2025-05-31 16:05:22.410227 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:22.410322 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:22.410334 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:22.410903 | orchestrator | 2025-05-31 16:05:22.411984 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-31 16:05:22.412294 | orchestrator | Saturday 31 May 2025 16:05:22 +0000 (0:00:01.088) 0:06:59.331 ********** 2025-05-31 16:05:23.905065 | orchestrator | changed: [testbed-manager] 2025-05-31 16:05:23.905276 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:23.906258 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:23.908827 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:23.909640 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:23.910377 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:23.911161 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:23.911882 | orchestrator | 2025-05-31 16:05:23.912417 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-31 16:05:23.913155 | orchestrator | Saturday 31 May 2025 16:05:23 +0000 (0:00:01.495) 0:07:00.826 ********** 2025-05-31 16:05:25.056209 | orchestrator | changed: [testbed-manager] 2025-05-31 16:05:25.056587 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:25.057240 | orchestrator | changed: [testbed-node-4] 
2025-05-31 16:05:25.057935 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:25.058865 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:25.060577 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:25.060602 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:25.061146 | orchestrator | 2025-05-31 16:05:25.061713 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-31 16:05:25.062331 | orchestrator | Saturday 31 May 2025 16:05:25 +0000 (0:00:01.151) 0:07:01.977 ********** 2025-05-31 16:05:25.182596 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:27.061193 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:27.061303 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:27.062937 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:27.063672 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:27.064604 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:27.064948 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:27.065532 | orchestrator | 2025-05-31 16:05:27.066302 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-31 16:05:27.066581 | orchestrator | Saturday 31 May 2025 16:05:27 +0000 (0:00:02.002) 0:07:03.980 ********** 2025-05-31 16:05:27.163529 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:27.165009 | orchestrator | 2025-05-31 16:05:27.165449 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-31 16:05:27.166887 | orchestrator | Saturday 31 May 2025 16:05:27 +0000 (0:00:00.104) 0:07:04.084 ********** 2025-05-31 16:05:28.180303 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:28.180861 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:28.182106 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:28.182627 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:28.183272 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:28.184468 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:28.184908 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:28.185410 | orchestrator | 2025-05-31 16:05:28.186099 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-31 16:05:28.186533 | orchestrator | Saturday 31 May 2025 16:05:28 +0000 (0:00:01.018) 0:07:05.103 ********** 2025-05-31 16:05:28.308057 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:28.378228 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:28.438865 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:28.498239 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:28.720678 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:28.846409 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:28.847203 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:28.848412 | orchestrator | 2025-05-31 16:05:28.849553 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-31 16:05:28.850073 | orchestrator | Saturday 31 May 2025 16:05:28 +0000 (0:00:00.665) 0:07:05.769 ********** 2025-05-31 16:05:29.706852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 
16:05:29.707830 | orchestrator | 2025-05-31 16:05:29.708407 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-31 16:05:29.709012 | orchestrator | Saturday 31 May 2025 16:05:29 +0000 (0:00:00.858) 0:07:06.627 ********** 2025-05-31 16:05:30.134609 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:30.550719 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:30.551068 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:30.551752 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:30.552238 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:30.552798 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:30.553438 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:30.553932 | orchestrator | 2025-05-31 16:05:30.554477 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-31 16:05:30.554697 | orchestrator | Saturday 31 May 2025 16:05:30 +0000 (0:00:00.843) 0:07:07.471 ********** 2025-05-31 16:05:33.134245 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-31 16:05:33.134529 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-31 16:05:33.134552 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-31 16:05:33.136236 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-31 16:05:33.137219 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-31 16:05:33.138157 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-31 16:05:33.138835 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-31 16:05:33.139303 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-31 16:05:33.139922 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-31 16:05:33.140478 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-31 16:05:33.141490 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-31 16:05:33.141857 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-31 16:05:33.142890 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-31 16:05:33.143196 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-31 16:05:33.143440 | orchestrator | 2025-05-31 16:05:33.143973 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-31 16:05:33.144415 | orchestrator | Saturday 31 May 2025 16:05:33 +0000 (0:00:02.583) 0:07:10.055 ********** 2025-05-31 16:05:33.253149 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:33.318351 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:33.378451 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:33.437826 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:33.504347 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:33.607592 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:33.608243 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:33.609364 | orchestrator | 2025-05-31 16:05:33.609922 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-31 16:05:33.610982 | orchestrator | Saturday 31 May 2025 16:05:33 +0000 (0:00:00.473) 0:07:10.529 ********** 2025-05-31 16:05:34.394979 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:05:34.395458 | orchestrator | 2025-05-31 16:05:34.396534 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-31 16:05:34.397260 | orchestrator | Saturday 31 May 2025 16:05:34 +0000 (0:00:00.786) 0:07:11.315 ********** 2025-05-31 16:05:35.221739 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:35.222389 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:35.223238 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:35.225164 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:35.226171 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:35.227912 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:35.228554 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:35.230316 | orchestrator | 2025-05-31 16:05:35.230343 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-31 16:05:35.230357 | orchestrator | Saturday 31 May 2025 16:05:35 +0000 (0:00:00.826) 0:07:12.142 ********** 2025-05-31 16:05:35.671259 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:35.743968 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:36.279745 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:36.280338 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:36.281257 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:36.282133 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:36.282888 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:36.284024 | orchestrator | 2025-05-31 16:05:36.284200 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-31 16:05:36.284893 | orchestrator | Saturday 31 May 2025 16:05:36 +0000 (0:00:01.058) 0:07:13.201 ********** 2025-05-31 16:05:36.405375 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:36.463539 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:36.527504 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:36.597935 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:36.655659 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:36.741292 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:36.741485 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:36.742458 | orchestrator | 2025-05-31 16:05:36.743564 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-31 16:05:36.745417 | orchestrator | Saturday 31 May 2025 16:05:36 +0000 (0:00:00.464) 0:07:13.665 ********** 2025-05-31 16:05:38.257906 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:38.258127 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:38.258217 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:38.258878 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:38.259082 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:38.259445 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:38.259677 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:38.260029 | orchestrator | 2025-05-31 16:05:38.262474 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-31 16:05:38.262503 | orchestrator | Saturday 31 May 2025 16:05:38 +0000 (0:00:01.513) 0:07:15.179 ********** 2025-05-31 
16:05:38.397366 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:38.458953 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:38.518329 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:38.581515 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:38.642980 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:38.735111 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:38.735216 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:38.735237 | orchestrator | 2025-05-31 16:05:38.735258 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-31 16:05:38.735407 | orchestrator | Saturday 31 May 2025 16:05:38 +0000 (0:00:00.475) 0:07:15.655 ********** 2025-05-31 16:05:40.837872 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:40.839401 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:40.840199 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:40.841963 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:40.843589 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:40.844579 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:40.845418 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:40.846356 | orchestrator | 2025-05-31 16:05:40.847283 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-31 16:05:40.847386 | orchestrator | Saturday 31 May 2025 16:05:40 +0000 (0:00:02.103) 0:07:17.759 ********** 2025-05-31 16:05:42.142123 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:42.142548 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:42.143229 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:42.144949 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:42.145187 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:42.145520 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:42.145805 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:42.146423 | orchestrator | 2025-05-31 16:05:42.147056 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-31 16:05:42.147641 | orchestrator | Saturday 31 May 2025 16:05:42 +0000 (0:00:01.303) 0:07:19.062 ********** 2025-05-31 16:05:43.803870 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:43.804865 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:43.805621 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:43.806479 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:43.807227 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:43.808000 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:43.809040 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:05:43.809610 | orchestrator | 2025-05-31 16:05:43.810480 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-31 16:05:43.811205 | orchestrator | Saturday 31 May 2025 16:05:43 +0000 (0:00:01.661) 0:07:20.724 ********** 2025-05-31 16:05:45.444142 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:45.444744 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:05:45.448354 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:05:45.448395 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:05:45.448408 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:05:45.448419 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:05:45.449762 | orchestrator | changed: [testbed-node-2] 
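The docker_compose tasks above install the docker-compose-plugin package and hook compose projects into systemd through an osism.target and a docker-compose unit file. The content of the units shipped by osism.commons.docker_compose is not visible in this log; the following is only a sketch, under assumed unit names, paths, and options, of how a templated compose unit is commonly bound to such a target:

- hosts: all
  become: true
  tasks:
    # Assumed unit layout; the role's real template (and the osism.target file it copies earlier) may differ.
    - name: Install a docker compose service unit bound to osism.target
      ansible.builtin.copy:
        dest: /etc/systemd/system/docker-compose@.service
        content: |
          [Unit]
          Description=docker compose project %i
          PartOf=osism.target
          After=docker.service

          [Service]
          WorkingDirectory=/opt/%i
          ExecStart=/usr/bin/docker compose up --remove-orphans
          ExecStop=/usr/bin/docker compose down

          [Install]
          WantedBy=osism.target
      notify: Reload systemd

    - name: Enable osism.target so member units start at boot
      ansible.builtin.systemd_service:
        name: osism.target
        enabled: true

  handlers:
    - name: Reload systemd
      ansible.builtin.systemd_service:
        daemon_reload: true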
2025-05-31 16:05:45.452464 | orchestrator | 2025-05-31 16:05:45.453559 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-31 16:05:45.454132 | orchestrator | Saturday 31 May 2025 16:05:45 +0000 (0:00:01.641) 0:07:22.366 ********** 2025-05-31 16:05:46.054867 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:46.466992 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:46.467200 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:46.468169 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:46.468670 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:46.471280 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:46.471317 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:46.471333 | orchestrator | 2025-05-31 16:05:46.472714 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-31 16:05:46.472916 | orchestrator | Saturday 31 May 2025 16:05:46 +0000 (0:00:01.023) 0:07:23.389 ********** 2025-05-31 16:05:46.594526 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:46.652235 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:46.727866 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:46.789601 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:46.848094 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:47.231715 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:47.231936 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:47.231954 | orchestrator | 2025-05-31 16:05:47.232385 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-31 16:05:47.232566 | orchestrator | Saturday 31 May 2025 16:05:47 +0000 (0:00:00.766) 0:07:24.156 ********** 2025-05-31 16:05:47.364563 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:47.424889 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:47.488942 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:47.558300 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:47.619414 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:47.715545 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:47.716976 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:47.717624 | orchestrator | 2025-05-31 16:05:47.718350 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-31 16:05:47.718571 | orchestrator | Saturday 31 May 2025 16:05:47 +0000 (0:00:00.482) 0:07:24.638 ********** 2025-05-31 16:05:47.846879 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:47.908377 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:47.979253 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:48.037076 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:48.097889 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:48.204963 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:48.205150 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:48.206271 | orchestrator | 2025-05-31 16:05:48.210204 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-31 16:05:48.210524 | orchestrator | Saturday 31 May 2025 16:05:48 +0000 (0:00:00.488) 0:07:25.127 ********** 2025-05-31 16:05:48.326252 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:48.392230 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:48.612769 | orchestrator | ok: [testbed-node-4] 2025-05-31 
16:05:48.673920 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:48.734581 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:48.848549 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:48.848713 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:48.849595 | orchestrator | 2025-05-31 16:05:48.850897 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-31 16:05:48.851251 | orchestrator | Saturday 31 May 2025 16:05:48 +0000 (0:00:00.644) 0:07:25.771 ********** 2025-05-31 16:05:48.976464 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:49.049181 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:49.118521 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:49.179315 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:49.244840 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:49.343354 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:49.343469 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:49.344444 | orchestrator | 2025-05-31 16:05:49.345420 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-31 16:05:49.345985 | orchestrator | Saturday 31 May 2025 16:05:49 +0000 (0:00:00.493) 0:07:26.264 ********** 2025-05-31 16:05:55.259711 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:55.259905 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:55.260562 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:55.261271 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:55.261733 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:55.262585 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:55.263143 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:55.263835 | orchestrator | 2025-05-31 16:05:55.264315 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-31 16:05:55.264888 | orchestrator | Saturday 31 May 2025 16:05:55 +0000 (0:00:05.917) 0:07:32.181 ********** 2025-05-31 16:05:55.425654 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:05:55.488226 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:05:55.561268 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:05:55.617134 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:05:55.677720 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:05:55.803369 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:05:55.804214 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:05:55.804772 | orchestrator | 2025-05-31 16:05:55.805409 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-31 16:05:55.805953 | orchestrator | Saturday 31 May 2025 16:05:55 +0000 (0:00:00.545) 0:07:32.726 ********** 2025-05-31 16:05:56.750473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:05:56.750660 | orchestrator | 2025-05-31 16:05:56.754832 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-31 16:05:56.755364 | orchestrator | Saturday 31 May 2025 16:05:56 +0000 (0:00:00.944) 0:07:33.671 ********** 2025-05-31 16:05:58.690771 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:58.693426 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:58.696370 | orchestrator | ok: 
[testbed-node-5] 2025-05-31 16:05:58.696523 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:58.698084 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:58.698621 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:58.699718 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:58.700227 | orchestrator | 2025-05-31 16:05:58.700701 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-31 16:05:58.701142 | orchestrator | Saturday 31 May 2025 16:05:58 +0000 (0:00:01.942) 0:07:35.613 ********** 2025-05-31 16:05:59.860272 | orchestrator | ok: [testbed-manager] 2025-05-31 16:05:59.860578 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:05:59.862113 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:05:59.862260 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:05:59.862656 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:05:59.863312 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:05:59.864755 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:05:59.865128 | orchestrator | 2025-05-31 16:05:59.865897 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-31 16:05:59.868517 | orchestrator | Saturday 31 May 2025 16:05:59 +0000 (0:00:01.167) 0:07:36.780 ********** 2025-05-31 16:06:00.262530 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:00.671626 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:00.673489 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:00.674205 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:00.675430 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:00.676477 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:00.676912 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:00.677960 | orchestrator | 2025-05-31 16:06:00.679274 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-31 16:06:00.680128 | orchestrator | Saturday 31 May 2025 16:06:00 +0000 (0:00:00.814) 0:07:37.595 ********** 2025-05-31 16:06:02.515207 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.516238 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.517133 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.517569 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.518203 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.519029 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.519883 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-31 16:06:02.521039 | orchestrator | 2025-05-31 16:06:02.521308 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-31 16:06:02.522098 | orchestrator | 
Saturday 31 May 2025 16:06:02 +0000 (0:00:01.842) 0:07:39.438 ********** 2025-05-31 16:06:03.278852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:06:03.279360 | orchestrator | 2025-05-31 16:06:03.280304 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-31 16:06:03.284145 | orchestrator | Saturday 31 May 2025 16:06:03 +0000 (0:00:00.763) 0:07:40.202 ********** 2025-05-31 16:06:12.318924 | orchestrator | changed: [testbed-manager] 2025-05-31 16:06:12.319086 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:12.319736 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:12.319974 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:12.320925 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:12.322502 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:12.323103 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:12.323542 | orchestrator | 2025-05-31 16:06:12.324257 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-31 16:06:12.324733 | orchestrator | Saturday 31 May 2025 16:06:12 +0000 (0:00:09.036) 0:07:49.238 ********** 2025-05-31 16:06:14.210271 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:14.211708 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:14.213757 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:14.214178 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:14.214556 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:14.215941 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:14.216585 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:14.217713 | orchestrator | 2025-05-31 16:06:14.218440 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-31 16:06:14.219373 | orchestrator | Saturday 31 May 2025 16:06:14 +0000 (0:00:01.892) 0:07:51.130 ********** 2025-05-31 16:06:15.514425 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:15.514877 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:15.515722 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:15.516333 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:15.518095 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:15.518546 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:15.519339 | orchestrator | 2025-05-31 16:06:15.521421 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-31 16:06:15.522072 | orchestrator | Saturday 31 May 2025 16:06:15 +0000 (0:00:01.307) 0:07:52.438 ********** 2025-05-31 16:06:16.867434 | orchestrator | changed: [testbed-manager] 2025-05-31 16:06:16.867951 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:16.871949 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:16.872438 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:16.873643 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:16.874764 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:16.875627 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:16.878706 | orchestrator | 2025-05-31 16:06:16.879247 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-31 16:06:16.880065 | orchestrator | 2025-05-31 
16:06:16.883455 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-31 16:06:16.883970 | orchestrator | Saturday 31 May 2025 16:06:16 +0000 (0:00:01.352) 0:07:53.790 ********** 2025-05-31 16:06:16.984434 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:06:17.047858 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:06:17.112382 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:06:17.163675 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:06:17.227385 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:06:17.332919 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:06:17.333713 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:06:17.335033 | orchestrator | 2025-05-31 16:06:17.338148 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-31 16:06:17.338175 | orchestrator | 2025-05-31 16:06:17.338188 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-31 16:06:17.338199 | orchestrator | Saturday 31 May 2025 16:06:17 +0000 (0:00:00.464) 0:07:54.255 ********** 2025-05-31 16:06:18.597961 | orchestrator | changed: [testbed-manager] 2025-05-31 16:06:18.598356 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:18.599482 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:18.600103 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:18.604503 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:18.604630 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:18.605177 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:18.605797 | orchestrator | 2025-05-31 16:06:18.606188 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-31 16:06:18.606762 | orchestrator | Saturday 31 May 2025 16:06:18 +0000 (0:00:01.264) 0:07:55.519 ********** 2025-05-31 16:06:20.022379 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:20.022556 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:20.023293 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:20.024242 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:20.026186 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:20.026212 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:20.026268 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:20.027222 | orchestrator | 2025-05-31 16:06:20.029378 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-31 16:06:20.029925 | orchestrator | Saturday 31 May 2025 16:06:20 +0000 (0:00:01.423) 0:07:56.943 ********** 2025-05-31 16:06:20.165301 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:06:20.222730 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:06:20.290706 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:06:20.504737 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:06:20.567336 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:06:20.950108 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:06:20.950794 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:06:20.952254 | orchestrator | 2025-05-31 16:06:20.952297 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-31 16:06:20.953197 | orchestrator | Saturday 31 May 2025 16:06:20 +0000 (0:00:00.927) 0:07:57.871 ********** 2025-05-31 16:06:22.170109 | orchestrator | changed: 
[testbed-manager] 2025-05-31 16:06:22.170286 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:22.171329 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:22.172199 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:22.172401 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:22.172983 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:22.173448 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:22.174592 | orchestrator | 2025-05-31 16:06:22.174915 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-31 16:06:22.175129 | orchestrator | 2025-05-31 16:06:22.175404 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-31 16:06:22.175744 | orchestrator | Saturday 31 May 2025 16:06:22 +0000 (0:00:01.222) 0:07:59.093 ********** 2025-05-31 16:06:22.931026 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:06:22.931495 | orchestrator | 2025-05-31 16:06:22.931927 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-31 16:06:22.933389 | orchestrator | Saturday 31 May 2025 16:06:22 +0000 (0:00:00.758) 0:07:59.851 ********** 2025-05-31 16:06:23.313137 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:23.449151 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:23.940677 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:23.940976 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:23.941835 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:23.942444 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:23.942936 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:23.943594 | orchestrator | 2025-05-31 16:06:23.944150 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-31 16:06:23.944637 | orchestrator | Saturday 31 May 2025 16:06:23 +0000 (0:00:01.008) 0:08:00.859 ********** 2025-05-31 16:06:25.072531 | orchestrator | changed: [testbed-manager] 2025-05-31 16:06:25.072750 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:25.075126 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:25.075160 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:25.075172 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:25.075183 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:25.077070 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:25.077370 | orchestrator | 2025-05-31 16:06:25.078281 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-31 16:06:25.079001 | orchestrator | Saturday 31 May 2025 16:06:25 +0000 (0:00:01.134) 0:08:01.994 ********** 2025-05-31 16:06:25.977618 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:06:25.977716 | orchestrator | 2025-05-31 16:06:25.980598 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-31 16:06:25.980628 | orchestrator | Saturday 31 May 2025 16:06:25 +0000 (0:00:00.904) 0:08:02.898 ********** 2025-05-31 16:06:26.806922 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:26.807032 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:26.807052 | orchestrator | ok: 
[testbed-node-3] 2025-05-31 16:06:26.808171 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:26.808836 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:26.810147 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:26.810631 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:26.811422 | orchestrator | 2025-05-31 16:06:26.812416 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-31 16:06:26.813033 | orchestrator | Saturday 31 May 2025 16:06:26 +0000 (0:00:00.828) 0:08:03.726 ********** 2025-05-31 16:06:27.243932 | orchestrator | changed: [testbed-manager] 2025-05-31 16:06:27.929603 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:27.929734 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:27.930793 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:27.931679 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:27.932327 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:27.933429 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:27.934264 | orchestrator | 2025-05-31 16:06:27.935313 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:06:27.938014 | orchestrator | 2025-05-31 16:06:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:06:27.938112 | orchestrator | 2025-05-31 16:06:27 | INFO  | Please wait and do not abort execution. 2025-05-31 16:06:27.939199 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-31 16:06:27.939411 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-31 16:06:27.939880 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-31 16:06:27.940693 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-31 16:06:27.941015 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-31 16:06:27.941922 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-31 16:06:27.942840 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-31 16:06:27.942985 | orchestrator | 2025-05-31 16:06:27.943006 | orchestrator | Saturday 31 May 2025 16:06:27 +0000 (0:00:01.124) 0:08:04.851 ********** 2025-05-31 16:06:27.943607 | orchestrator | =============================================================================== 2025-05-31 16:06:27.943976 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.88s 2025-05-31 16:06:27.944770 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.87s 2025-05-31 16:06:27.945097 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.73s 2025-05-31 16:06:27.946110 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.05s 2025-05-31 16:06:27.946200 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.81s 2025-05-31 16:06:27.946627 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.64s 2025-05-31 16:06:27.946981 | orchestrator | osism.services.docker : Install 
apt-transport-https package ------------ 11.94s 2025-05-31 16:06:27.947518 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 11.06s 2025-05-31 16:06:27.947871 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.77s 2025-05-31 16:06:27.948217 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.43s 2025-05-31 16:06:27.948683 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.63s 2025-05-31 16:06:27.949063 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.04s 2025-05-31 16:06:27.949439 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.26s 2025-05-31 16:06:27.949856 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.10s 2025-05-31 16:06:27.950467 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.73s 2025-05-31 16:06:27.950779 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.68s 2025-05-31 16:06:27.951210 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.48s 2025-05-31 16:06:27.951557 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.65s 2025-05-31 16:06:27.951854 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.93s 2025-05-31 16:06:27.952182 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.92s 2025-05-31 16:06:28.479669 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-31 16:06:28.479770 | orchestrator | + osism apply network 2025-05-31 16:06:30.293080 | orchestrator | 2025-05-31 16:06:30 | INFO  | Task d29e9989-e8ce-43b5-80d8-9bdd1df1ab9f (network) was prepared for execution. 2025-05-31 16:06:30.293319 | orchestrator | 2025-05-31 16:06:30 | INFO  | It takes a moment until task d29e9989-e8ce-43b5-80d8-9bdd1df1ab9f (network) has been started and output is visible here. 
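
The bootstrap play that just finished configured chrony, lldpd and journald on every testbed host; the output of the network play started above follows below. As a rough manual cross-check, which is not part of this job, commands along the following lines could be run on any of the nodes. They assume the stock Ubuntu service names (chrony, lldpd, systemd-journald), and the facts path shown is only a guess, since the osism.commons.state role does not print where it writes its state files.

    # verify time synchronisation after the chrony handler restarted the service
    systemctl is-active chrony
    chronyc tracking
    # confirm lldpd is running and can already see neighbours
    systemctl is-active lldpd
    lldpcli show neighbors
    # journald should have picked up the copied configuration after its restart
    systemctl is-active systemd-journald
    # hypothetical location of the custom facts written by osism.commons.state
    ls /etc/ansible/facts.d/
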
2025-05-31 16:06:33.461134 | orchestrator | 2025-05-31 16:06:33.461248 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-31 16:06:33.462704 | orchestrator | 2025-05-31 16:06:33.465999 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-31 16:06:33.466305 | orchestrator | Saturday 31 May 2025 16:06:33 +0000 (0:00:00.202) 0:00:00.202 ********** 2025-05-31 16:06:33.606802 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:33.681137 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:33.755109 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:33.829922 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:33.903860 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:34.128806 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:34.128962 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:34.129743 | orchestrator | 2025-05-31 16:06:34.130455 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-31 16:06:34.133611 | orchestrator | Saturday 31 May 2025 16:06:34 +0000 (0:00:00.667) 0:00:00.870 ********** 2025-05-31 16:06:35.272246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:06:35.272529 | orchestrator | 2025-05-31 16:06:35.273290 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-31 16:06:35.276635 | orchestrator | Saturday 31 May 2025 16:06:35 +0000 (0:00:01.142) 0:00:02.012 ********** 2025-05-31 16:06:37.164066 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:37.164798 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:37.165575 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:37.169346 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:37.170529 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:37.170937 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:37.171481 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:37.172033 | orchestrator | 2025-05-31 16:06:37.172493 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-31 16:06:37.172981 | orchestrator | Saturday 31 May 2025 16:06:37 +0000 (0:00:01.892) 0:00:03.904 ********** 2025-05-31 16:06:38.826491 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:38.829004 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:38.829898 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:38.830810 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:38.831364 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:38.832149 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:38.833948 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:38.834178 | orchestrator | 2025-05-31 16:06:38.834911 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-31 16:06:38.835053 | orchestrator | Saturday 31 May 2025 16:06:38 +0000 (0:00:01.660) 0:00:05.564 ********** 2025-05-31 16:06:39.309430 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-31 16:06:39.309933 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-31 16:06:39.871127 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-31 16:06:39.871682 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-31 16:06:39.871989 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-31 16:06:39.873004 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-31 16:06:39.876504 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-31 16:06:39.876586 | orchestrator | 2025-05-31 16:06:39.876605 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-31 16:06:39.876618 | orchestrator | Saturday 31 May 2025 16:06:39 +0000 (0:00:01.048) 0:00:06.613 ********** 2025-05-31 16:06:41.668900 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:06:41.672004 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-31 16:06:41.672302 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:06:41.675182 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-31 16:06:41.675800 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-31 16:06:41.676419 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-31 16:06:41.676927 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-31 16:06:41.677465 | orchestrator | 2025-05-31 16:06:41.677975 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-31 16:06:41.678418 | orchestrator | Saturday 31 May 2025 16:06:41 +0000 (0:00:01.798) 0:00:08.411 ********** 2025-05-31 16:06:43.262119 | orchestrator | changed: [testbed-manager] 2025-05-31 16:06:43.262347 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:43.263442 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:43.263535 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:43.264681 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:43.265821 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:43.266703 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:43.267038 | orchestrator | 2025-05-31 16:06:43.268450 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-31 16:06:43.268482 | orchestrator | Saturday 31 May 2025 16:06:43 +0000 (0:00:01.589) 0:00:10.000 ********** 2025-05-31 16:06:43.708642 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:06:43.788965 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:06:44.225360 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-31 16:06:44.225468 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-31 16:06:44.226085 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-31 16:06:44.227328 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-31 16:06:44.227773 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-31 16:06:44.228216 | orchestrator | 2025-05-31 16:06:44.229035 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-31 16:06:44.229209 | orchestrator | Saturday 31 May 2025 16:06:44 +0000 (0:00:00.969) 0:00:10.970 ********** 2025-05-31 16:06:44.644927 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:44.729657 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:45.326401 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:45.327227 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:45.327245 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:45.329553 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:45.330353 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:45.330602 | orchestrator | 2025-05-31 
16:06:45.331180 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-31 16:06:45.331993 | orchestrator | Saturday 31 May 2025 16:06:45 +0000 (0:00:01.096) 0:00:12.066 ********** 2025-05-31 16:06:45.485578 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:06:45.571248 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:06:45.650418 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:06:45.727611 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:06:45.804940 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:06:46.074266 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:06:46.074464 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:06:46.075297 | orchestrator | 2025-05-31 16:06:46.076018 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-31 16:06:46.076524 | orchestrator | Saturday 31 May 2025 16:06:46 +0000 (0:00:00.750) 0:00:12.816 ********** 2025-05-31 16:06:47.928642 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:47.928745 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:47.931505 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:47.931923 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:47.933485 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:47.934478 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:47.935640 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:47.936292 | orchestrator | 2025-05-31 16:06:47.937068 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-31 16:06:47.938222 | orchestrator | Saturday 31 May 2025 16:06:47 +0000 (0:00:01.853) 0:00:14.670 ********** 2025-05-31 16:06:48.701442 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-31 16:06:49.798472 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.799162 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.800983 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.801735 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.802485 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.803204 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.803690 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-31 16:06:49.804206 | orchestrator | 2025-05-31 16:06:49.804996 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-31 16:06:49.805330 | orchestrator | Saturday 31 May 2025 16:06:49 +0000 (0:00:01.868) 0:00:16.538 ********** 2025-05-31 16:06:51.296267 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:51.296392 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:06:51.296481 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:06:51.296995 | 
orchestrator | changed: [testbed-node-1] 2025-05-31 16:06:51.300634 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:06:51.300691 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:06:51.300705 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:06:51.300718 | orchestrator | 2025-05-31 16:06:51.300732 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-31 16:06:51.302322 | orchestrator | Saturday 31 May 2025 16:06:51 +0000 (0:00:01.501) 0:00:18.039 ********** 2025-05-31 16:06:52.603794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:06:52.604037 | orchestrator | 2025-05-31 16:06:52.604446 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-31 16:06:52.607485 | orchestrator | Saturday 31 May 2025 16:06:52 +0000 (0:00:01.304) 0:00:19.344 ********** 2025-05-31 16:06:53.113507 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:53.540318 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:53.540522 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:53.542304 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:53.544523 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:53.545771 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:53.546211 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:53.547137 | orchestrator | 2025-05-31 16:06:53.547892 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-31 16:06:53.548320 | orchestrator | Saturday 31 May 2025 16:06:53 +0000 (0:00:00.937) 0:00:20.281 ********** 2025-05-31 16:06:53.695346 | orchestrator | ok: [testbed-manager] 2025-05-31 16:06:53.775649 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:06:53.987272 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:06:54.072215 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:06:54.156353 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:06:54.292432 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:06:54.293415 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:06:54.295782 | orchestrator | 2025-05-31 16:06:54.296283 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-31 16:06:54.297725 | orchestrator | Saturday 31 May 2025 16:06:54 +0000 (0:00:00.747) 0:00:21.029 ********** 2025-05-31 16:06:54.727282 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:54.727598 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:54.821042 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:54.821203 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:55.287106 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:55.288019 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:55.288978 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:55.290733 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:55.291077 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:55.293340 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:55.293915 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:55.294568 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:55.295335 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 16:06:55.296258 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 16:06:55.296404 | orchestrator | 2025-05-31 16:06:55.296642 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-31 16:06:55.296936 | orchestrator | Saturday 31 May 2025 16:06:55 +0000 (0:00:01.001) 0:00:22.031 ********** 2025-05-31 16:06:55.571551 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:06:55.651651 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:06:55.733388 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:06:55.814831 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:06:55.895030 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:06:56.978407 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:06:56.978548 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:06:56.978632 | orchestrator | 2025-05-31 16:06:56.979366 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-31 16:06:56.979686 | orchestrator | Saturday 31 May 2025 16:06:56 +0000 (0:00:01.687) 0:00:23.718 ********** 2025-05-31 16:06:57.132636 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:06:57.212733 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:06:57.444032 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:06:57.528051 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:06:57.611037 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:06:57.646983 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:06:57.647111 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:06:57.649861 | orchestrator | 2025-05-31 16:06:57.651792 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:06:57.651981 | orchestrator | 2025-05-31 16:06:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:06:57.653738 | orchestrator | 2025-05-31 16:06:57 | INFO  | Please wait and do not abort execution. 
2025-05-31 16:06:57.655191 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.656293 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.657100 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.658123 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.659893 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.664347 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.669140 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:06:57.669246 | orchestrator | 2025-05-31 16:06:57.670098 | orchestrator | Saturday 31 May 2025 16:06:57 +0000 (0:00:00.673) 0:00:24.392 ********** 2025-05-31 16:06:57.671289 | orchestrator | =============================================================================== 2025-05-31 16:06:57.672147 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2025-05-31 16:06:57.672619 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.87s 2025-05-31 16:06:57.672986 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.85s 2025-05-31 16:06:57.673527 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.80s 2025-05-31 16:06:57.674326 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.69s 2025-05-31 16:06:57.675512 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.66s 2025-05-31 16:06:57.677456 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2025-05-31 16:06:57.677482 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.50s 2025-05-31 16:06:57.677494 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2025-05-31 16:06:57.677505 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.14s 2025-05-31 16:06:57.677577 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.10s 2025-05-31 16:06:57.677947 | orchestrator | osism.commons.network : Create required directories --------------------- 1.05s 2025-05-31 16:06:57.678511 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.00s 2025-05-31 16:06:57.679513 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.97s 2025-05-31 16:06:57.679895 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s 2025-05-31 16:06:57.680443 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.75s 2025-05-31 16:06:57.680991 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.75s 2025-05-31 16:06:57.681231 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.67s 2025-05-31 16:06:57.681686 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.67s 2025-05-31 16:06:58.089952 | orchestrator | + osism apply wireguard 2025-05-31 16:06:59.464054 | orchestrator | 2025-05-31 16:06:59 | INFO  | Task c63899be-5c53-46ce-8919-445fa00729a2 (wireguard) was prepared for execution. 2025-05-31 16:06:59.464171 | orchestrator | 2025-05-31 16:06:59 | INFO  | It takes a moment until task c63899be-5c53-46ce-8919-445fa00729a2 (wireguard) has been started and output is visible here. 2025-05-31 16:07:02.396390 | orchestrator | 2025-05-31 16:07:02.396958 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-31 16:07:02.398445 | orchestrator | 2025-05-31 16:07:02.398882 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-31 16:07:02.399442 | orchestrator | Saturday 31 May 2025 16:07:02 +0000 (0:00:00.154) 0:00:00.154 ********** 2025-05-31 16:07:03.772331 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:03.772644 | orchestrator | 2025-05-31 16:07:03.772999 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-31 16:07:03.774330 | orchestrator | Saturday 31 May 2025 16:07:03 +0000 (0:00:01.379) 0:00:01.534 ********** 2025-05-31 16:07:09.770075 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:09.770261 | orchestrator | 2025-05-31 16:07:09.771541 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-31 16:07:09.772564 | orchestrator | Saturday 31 May 2025 16:07:09 +0000 (0:00:05.998) 0:00:07.532 ********** 2025-05-31 16:07:10.308379 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:10.308510 | orchestrator | 2025-05-31 16:07:10.308597 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-31 16:07:10.310153 | orchestrator | Saturday 31 May 2025 16:07:10 +0000 (0:00:00.540) 0:00:08.072 ********** 2025-05-31 16:07:10.687564 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:10.687737 | orchestrator | 2025-05-31 16:07:10.688086 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-31 16:07:10.688554 | orchestrator | Saturday 31 May 2025 16:07:10 +0000 (0:00:00.379) 0:00:08.452 ********** 2025-05-31 16:07:11.188701 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:11.188986 | orchestrator | 2025-05-31 16:07:11.189224 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-31 16:07:11.189348 | orchestrator | Saturday 31 May 2025 16:07:11 +0000 (0:00:00.499) 0:00:08.952 ********** 2025-05-31 16:07:11.679077 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:11.679469 | orchestrator | 2025-05-31 16:07:11.680400 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-31 16:07:11.681319 | orchestrator | Saturday 31 May 2025 16:07:11 +0000 (0:00:00.490) 0:00:09.443 ********** 2025-05-31 16:07:12.092128 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:12.093512 | orchestrator | 2025-05-31 16:07:12.093556 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-31 16:07:12.094334 | orchestrator | Saturday 31 May 2025 16:07:12 +0000 (0:00:00.411) 0:00:09.854 ********** 2025-05-31 16:07:13.270544 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:13.270712 | orchestrator | 2025-05-31 16:07:13.271005 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-31 16:07:13.272374 | orchestrator | Saturday 31 May 2025 16:07:13 +0000 (0:00:01.176) 0:00:11.031 ********** 2025-05-31 16:07:14.144822 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 16:07:14.145058 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:14.145603 | orchestrator | 2025-05-31 16:07:14.146324 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-31 16:07:14.146892 | orchestrator | Saturday 31 May 2025 16:07:14 +0000 (0:00:00.875) 0:00:11.907 ********** 2025-05-31 16:07:15.816547 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:15.816943 | orchestrator | 2025-05-31 16:07:15.818151 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-31 16:07:15.818687 | orchestrator | Saturday 31 May 2025 16:07:15 +0000 (0:00:01.669) 0:00:13.577 ********** 2025-05-31 16:07:16.741960 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:16.742169 | orchestrator | 2025-05-31 16:07:16.742607 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:07:16.742652 | orchestrator | 2025-05-31 16:07:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:07:16.742669 | orchestrator | 2025-05-31 16:07:16 | INFO  | Please wait and do not abort execution. 2025-05-31 16:07:16.743159 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:07:16.744044 | orchestrator | 2025-05-31 16:07:16.744495 | orchestrator | Saturday 31 May 2025 16:07:16 +0000 (0:00:00.928) 0:00:14.505 ********** 2025-05-31 16:07:16.745444 | orchestrator | =============================================================================== 2025-05-31 16:07:16.745606 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.00s 2025-05-31 16:07:16.746096 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s 2025-05-31 16:07:16.746779 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.38s 2025-05-31 16:07:16.747628 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2025-05-31 16:07:16.748020 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-05-31 16:07:16.748635 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-05-31 16:07:16.749024 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-05-31 16:07:16.749876 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s 2025-05-31 16:07:16.750389 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.49s 2025-05-31 16:07:16.750929 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-05-31 16:07:16.751293 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s 2025-05-31 16:07:17.339371 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-31 16:07:17.377346 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-31 16:07:17.377431 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-31 16:07:17.453076 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 186 2025-05-31 16:07:17.464254 | orchestrator | + osism apply --environment custom workarounds 2025-05-31 16:07:18.905449 | orchestrator | 2025-05-31 16:07:18 | INFO  | Trying to run play workarounds in environment custom 2025-05-31 16:07:18.957966 | orchestrator | 2025-05-31 16:07:18 | INFO  | Task 12da5e3f-6722-438b-9a1c-7680e6e8400d (workarounds) was prepared for execution. 2025-05-31 16:07:18.958095 | orchestrator | 2025-05-31 16:07:18 | INFO  | It takes a moment until task 12da5e3f-6722-438b-9a1c-7680e6e8400d (workarounds) has been started and output is visible here. 2025-05-31 16:07:21.959283 | orchestrator | 2025-05-31 16:07:21.959471 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:07:21.961643 | orchestrator | 2025-05-31 16:07:21.961673 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-31 16:07:21.961687 | orchestrator | Saturday 31 May 2025 16:07:21 +0000 (0:00:00.138) 0:00:00.138 ********** 2025-05-31 16:07:22.145951 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-31 16:07:22.230312 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-31 16:07:22.311735 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-31 16:07:22.391255 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-31 16:07:22.473180 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-31 16:07:22.716812 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-31 16:07:22.717058 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-31 16:07:22.717470 | orchestrator | 2025-05-31 16:07:22.717972 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-31 16:07:22.718283 | orchestrator | 2025-05-31 16:07:22.719465 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-31 16:07:22.719486 | orchestrator | Saturday 31 May 2025 16:07:22 +0000 (0:00:00.760) 0:00:00.898 ********** 2025-05-31 16:07:25.214310 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:25.215157 | orchestrator | 2025-05-31 16:07:25.217257 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-31 16:07:25.219848 | orchestrator | 2025-05-31 16:07:25.219924 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-31 16:07:25.219938 | orchestrator | Saturday 31 May 2025 16:07:25 +0000 (0:00:02.492) 0:00:03.391 ********** 2025-05-31 16:07:27.032542 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:07:27.032761 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:07:27.033830 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:07:27.034432 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:07:27.035186 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:07:27.036176 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:07:27.037200 | orchestrator | 2025-05-31 16:07:27.038270 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-31 16:07:27.039067 | orchestrator | 2025-05-31 
16:07:27.041446 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-31 16:07:27.042277 | orchestrator | Saturday 31 May 2025 16:07:27 +0000 (0:00:01.822) 0:00:05.214 ********** 2025-05-31 16:07:28.447626 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 16:07:28.447730 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 16:07:28.447817 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 16:07:28.448267 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 16:07:28.449300 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 16:07:28.451013 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 16:07:28.451324 | orchestrator | 2025-05-31 16:07:28.452419 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-31 16:07:28.453039 | orchestrator | Saturday 31 May 2025 16:07:28 +0000 (0:00:01.412) 0:00:06.626 ********** 2025-05-31 16:07:32.157487 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:07:32.159837 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:07:32.159957 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:07:32.161127 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:07:32.162223 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:07:32.162811 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:07:32.163772 | orchestrator | 2025-05-31 16:07:32.164492 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-31 16:07:32.165100 | orchestrator | Saturday 31 May 2025 16:07:32 +0000 (0:00:03.713) 0:00:10.339 ********** 2025-05-31 16:07:32.298735 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:07:32.370979 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:07:32.449401 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:07:32.662344 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:07:32.797100 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:07:32.798073 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:07:32.799053 | orchestrator | 2025-05-31 16:07:32.799845 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-31 16:07:32.800672 | orchestrator | 2025-05-31 16:07:32.801717 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-31 16:07:32.802096 | orchestrator | Saturday 31 May 2025 16:07:32 +0000 (0:00:00.638) 0:00:10.978 ********** 2025-05-31 16:07:34.387843 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:34.388687 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:07:34.390355 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:07:34.392897 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:07:34.394638 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:07:34.395486 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:07:34.396787 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:07:34.397283 | orchestrator | 2025-05-31 16:07:34.398164 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-31 16:07:34.399855 | orchestrator | Saturday 31 May 2025 16:07:34 +0000 (0:00:01.590) 0:00:12.568 ********** 2025-05-31 16:07:35.948086 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:35.949408 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:07:35.949581 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:07:35.953615 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:07:35.953648 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:07:35.953661 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:07:35.953906 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:07:35.954842 | orchestrator | 2025-05-31 16:07:35.955356 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-31 16:07:35.956071 | orchestrator | Saturday 31 May 2025 16:07:35 +0000 (0:00:01.556) 0:00:14.125 ********** 2025-05-31 16:07:37.418786 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:37.419022 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:07:37.421262 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:07:37.423226 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:07:37.423250 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:07:37.423268 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:07:37.423317 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:07:37.423912 | orchestrator | 2025-05-31 16:07:37.424610 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-31 16:07:37.425493 | orchestrator | Saturday 31 May 2025 16:07:37 +0000 (0:00:01.475) 0:00:15.600 ********** 2025-05-31 16:07:39.100949 | orchestrator | changed: [testbed-manager] 2025-05-31 16:07:39.102115 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:07:39.102996 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:07:39.105557 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:07:39.105583 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:07:39.105666 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:07:39.106901 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:07:39.111932 | orchestrator | 2025-05-31 16:07:39.112169 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-31 16:07:39.114045 | orchestrator | Saturday 31 May 2025 16:07:39 +0000 (0:00:01.681) 0:00:17.282 ********** 2025-05-31 16:07:39.252266 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:07:39.322952 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:07:39.393440 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:07:39.460328 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:07:39.666098 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:07:39.798225 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:07:39.799117 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:07:39.799963 | orchestrator | 2025-05-31 16:07:39.801402 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-31 16:07:39.802114 | orchestrator | 2025-05-31 16:07:39.803934 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-31 16:07:39.804262 | orchestrator | Saturday 31 May 2025 16:07:39 +0000 (0:00:00.696) 0:00:17.979 ********** 2025-05-31 16:07:42.377759 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:07:42.378552 
| orchestrator | ok: [testbed-node-4] 2025-05-31 16:07:42.378815 | orchestrator | ok: [testbed-manager] 2025-05-31 16:07:42.379952 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:07:42.380677 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:07:42.381636 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:07:42.381708 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:07:42.381843 | orchestrator | 2025-05-31 16:07:42.382941 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:07:42.382987 | orchestrator | 2025-05-31 16:07:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:07:42.383354 | orchestrator | 2025-05-31 16:07:42 | INFO  | Please wait and do not abort execution. 2025-05-31 16:07:42.383420 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:07:42.384094 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:42.384517 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:42.385058 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:42.385691 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:42.386173 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:42.386502 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:42.386790 | orchestrator | 2025-05-31 16:07:42.387411 | orchestrator | Saturday 31 May 2025 16:07:42 +0000 (0:00:02.581) 0:00:20.560 ********** 2025-05-31 16:07:42.387499 | orchestrator | =============================================================================== 2025-05-31 16:07:42.388632 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2025-05-31 16:07:42.388922 | orchestrator | Install python3-docker -------------------------------------------------- 2.58s 2025-05-31 16:07:42.388976 | orchestrator | Apply netplan configuration --------------------------------------------- 2.49s 2025-05-31 16:07:42.389445 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2025-05-31 16:07:42.389654 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.68s 2025-05-31 16:07:42.390082 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s 2025-05-31 16:07:42.390206 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.56s 2025-05-31 16:07:42.390498 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2025-05-31 16:07:42.390847 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2025-05-31 16:07:42.391051 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-05-31 16:07:42.391421 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s 2025-05-31 16:07:42.392755 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.64s 2025-05-31 16:07:42.858306 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-31 16:07:44.223598 | orchestrator | 2025-05-31 16:07:44 | INFO  | Task 1428d6db-60d7-40ff-b2b0-fed81b394977 (reboot) was prepared for execution. 2025-05-31 16:07:44.223693 | orchestrator | 2025-05-31 16:07:44 | INFO  | It takes a moment until task 1428d6db-60d7-40ff-b2b0-fed81b394977 (reboot) has been started and output is visible here. 2025-05-31 16:07:47.161767 | orchestrator | 2025-05-31 16:07:47.162155 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 16:07:47.162602 | orchestrator | 2025-05-31 16:07:47.163325 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 16:07:47.165943 | orchestrator | Saturday 31 May 2025 16:07:47 +0000 (0:00:00.140) 0:00:00.140 ********** 2025-05-31 16:07:47.253839 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:07:47.254012 | orchestrator | 2025-05-31 16:07:47.254439 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 16:07:47.255195 | orchestrator | Saturday 31 May 2025 16:07:47 +0000 (0:00:00.095) 0:00:00.235 ********** 2025-05-31 16:07:48.128965 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:07:48.129085 | orchestrator | 2025-05-31 16:07:48.129166 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 16:07:48.129786 | orchestrator | Saturday 31 May 2025 16:07:48 +0000 (0:00:00.873) 0:00:01.109 ********** 2025-05-31 16:07:48.238660 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:07:48.239074 | orchestrator | 2025-05-31 16:07:48.239747 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 16:07:48.240292 | orchestrator | 2025-05-31 16:07:48.240987 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 16:07:48.241420 | orchestrator | Saturday 31 May 2025 16:07:48 +0000 (0:00:00.107) 0:00:01.217 ********** 2025-05-31 16:07:48.328086 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:07:48.328180 | orchestrator | 2025-05-31 16:07:48.328195 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 16:07:48.328299 | orchestrator | Saturday 31 May 2025 16:07:48 +0000 (0:00:00.090) 0:00:01.308 ********** 2025-05-31 16:07:48.969953 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:07:48.970257 | orchestrator | 2025-05-31 16:07:48.970498 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 16:07:48.971174 | orchestrator | Saturday 31 May 2025 16:07:48 +0000 (0:00:00.642) 0:00:01.950 ********** 2025-05-31 16:07:49.077063 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:07:49.077509 | orchestrator | 2025-05-31 16:07:49.078406 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 16:07:49.078856 | orchestrator | 2025-05-31 16:07:49.079641 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 16:07:49.080332 | orchestrator | Saturday 31 May 2025 16:07:49 +0000 (0:00:00.106) 0:00:02.056 ********** 2025-05-31 16:07:49.169016 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:07:49.169346 | orchestrator | 2025-05-31 16:07:49.170432 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-31 16:07:49.170944 | orchestrator | Saturday 31 May 2025 16:07:49 +0000 (0:00:00.093) 0:00:02.150 ********** 2025-05-31 16:07:49.900193 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:07:49.900731 | orchestrator | 2025-05-31 16:07:49.901643 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 16:07:49.903052 | orchestrator | Saturday 31 May 2025 16:07:49 +0000 (0:00:00.730) 0:00:02.880 ********** 2025-05-31 16:07:50.000810 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:07:50.001846 | orchestrator | 2025-05-31 16:07:50.002841 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 16:07:50.004108 | orchestrator | 2025-05-31 16:07:50.004757 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 16:07:50.005379 | orchestrator | Saturday 31 May 2025 16:07:49 +0000 (0:00:00.099) 0:00:02.980 ********** 2025-05-31 16:07:50.102349 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:07:50.102660 | orchestrator | 2025-05-31 16:07:50.104134 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 16:07:50.104881 | orchestrator | Saturday 31 May 2025 16:07:50 +0000 (0:00:00.102) 0:00:03.082 ********** 2025-05-31 16:07:50.762441 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:07:50.763535 | orchestrator | 2025-05-31 16:07:50.763996 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 16:07:50.764981 | orchestrator | Saturday 31 May 2025 16:07:50 +0000 (0:00:00.660) 0:00:03.742 ********** 2025-05-31 16:07:50.879132 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:07:50.880559 | orchestrator | 2025-05-31 16:07:50.880607 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 16:07:50.881592 | orchestrator | 2025-05-31 16:07:50.882211 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 16:07:50.883133 | orchestrator | Saturday 31 May 2025 16:07:50 +0000 (0:00:00.114) 0:00:03.857 ********** 2025-05-31 16:07:50.981465 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:07:50.982121 | orchestrator | 2025-05-31 16:07:50.983352 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 16:07:50.984365 | orchestrator | Saturday 31 May 2025 16:07:50 +0000 (0:00:00.104) 0:00:03.962 ********** 2025-05-31 16:07:51.621863 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:07:51.622140 | orchestrator | 2025-05-31 16:07:51.623640 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 16:07:51.623661 | orchestrator | Saturday 31 May 2025 16:07:51 +0000 (0:00:00.639) 0:00:04.602 ********** 2025-05-31 16:07:51.743597 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:07:51.744175 | orchestrator | 2025-05-31 16:07:51.745523 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 16:07:51.746662 | orchestrator | 2025-05-31 16:07:51.747211 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 16:07:51.748339 | orchestrator | Saturday 31 May 2025 16:07:51 +0000 (0:00:00.118) 0:00:04.720 ********** 2025-05-31 16:07:51.835778 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 16:07:51.836579 | orchestrator | 2025-05-31 16:07:51.837683 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 16:07:51.838486 | orchestrator | Saturday 31 May 2025 16:07:51 +0000 (0:00:00.095) 0:00:04.816 ********** 2025-05-31 16:07:52.486534 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:07:52.486639 | orchestrator | 2025-05-31 16:07:52.489604 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 16:07:52.489628 | orchestrator | Saturday 31 May 2025 16:07:52 +0000 (0:00:00.646) 0:00:05.462 ********** 2025-05-31 16:07:52.511256 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:07:52.511286 | orchestrator | 2025-05-31 16:07:52.511336 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:07:52.511601 | orchestrator | 2025-05-31 16:07:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:07:52.511622 | orchestrator | 2025-05-31 16:07:52 | INFO  | Please wait and do not abort execution. 2025-05-31 16:07:52.511862 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:52.512339 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:52.512458 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:52.512773 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:52.513071 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:52.513263 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:07:52.513489 | orchestrator | 2025-05-31 16:07:52.513749 | orchestrator | Saturday 31 May 2025 16:07:52 +0000 (0:00:00.029) 0:00:05.492 ********** 2025-05-31 16:07:52.514160 | orchestrator | =============================================================================== 2025-05-31 16:07:52.514451 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.19s 2025-05-31 16:07:52.514569 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s 2025-05-31 16:07:52.515595 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-05-31 16:07:52.954516 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-31 16:07:54.343265 | orchestrator | 2025-05-31 16:07:54 | INFO  | Task c9b2b754-6f8a-4b51-9dc6-abbb2a1d9fb6 (wait-for-connection) was prepared for execution. 2025-05-31 16:07:54.343376 | orchestrator | 2025-05-31 16:07:54 | INFO  | It takes a moment until task c9b2b754-6f8a-4b51-9dc6-abbb2a1d9fb6 (wait-for-connection) has been started and output is visible here. 
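Note: the nodes are rebooted with ireallymeanit=yes and without waiting for them to return; the separate wait-for-connection play that follows then blocks until SSH answers again on every node. A rough manual equivalent, for illustration only (node names taken from the inventory above; the timeout and sleep values are assumptions):

    # Poll each testbed node until SSH is reachable again after the reboot.
    for node in testbed-node-{0..5}; do
        until ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; do
            echo "waiting for $node ..."
            sleep 10
        done
    done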
2025-05-31 16:07:57.363836 | orchestrator | 2025-05-31 16:07:57.364374 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-31 16:07:57.365229 | orchestrator | 2025-05-31 16:07:57.365933 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-31 16:07:57.367772 | orchestrator | Saturday 31 May 2025 16:07:57 +0000 (0:00:00.164) 0:00:00.164 ********** 2025-05-31 16:08:10.598129 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:08:10.598243 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:08:10.598256 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:08:10.598265 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:08:10.598274 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:08:10.598306 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:08:10.598316 | orchestrator | 2025-05-31 16:08:10.598327 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:08:10.598338 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:10.598381 | orchestrator | 2025-05-31 16:08:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:08:10.598392 | orchestrator | 2025-05-31 16:08:10 | INFO  | Please wait and do not abort execution. 2025-05-31 16:08:10.598454 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:10.602687 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:10.602738 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:10.602749 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:10.602772 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:10.602782 | orchestrator | 2025-05-31 16:08:10.602802 | orchestrator | Saturday 31 May 2025 16:08:10 +0000 (0:00:13.230) 0:00:13.394 ********** 2025-05-31 16:08:10.602812 | orchestrator | =============================================================================== 2025-05-31 16:08:10.602821 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.23s 2025-05-31 16:08:11.022322 | orchestrator | + osism apply hddtemp 2025-05-31 16:08:12.403432 | orchestrator | 2025-05-31 16:08:12 | INFO  | Task 28e205d8-a9ec-44b8-a15f-13a97afc7cba (hddtemp) was prepared for execution. 2025-05-31 16:08:12.403540 | orchestrator | 2025-05-31 16:08:12 | INFO  | It takes a moment until task 28e205d8-a9ec-44b8-a15f-13a97afc7cba (hddtemp) has been started and output is visible here. 
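Note: the hddtemp play that follows replaces the deprecated hddtemp package with the in-kernel drivetemp hwmon driver plus lm-sensors, as the task names show. One plausible manual equivalent on a Debian-family host (the modules-load.d path and package manager calls are assumptions, not taken from the role):

    # Expose drive temperatures via the drivetemp hwmon driver instead of hddtemp.
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf   # load the module at boot (assumed mechanism)
    sudo modprobe drivetemp                                        # load it immediately
    sudo apt-get install -y lm-sensors                             # userspace sensor tooling
    sensors                                                        # drives then show up as drivetemp-* adapters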
2025-05-31 16:08:15.414187 | orchestrator | 2025-05-31 16:08:15.414360 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-31 16:08:15.416394 | orchestrator | 2025-05-31 16:08:15.418332 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-31 16:08:15.418892 | orchestrator | Saturday 31 May 2025 16:08:15 +0000 (0:00:00.186) 0:00:00.186 ********** 2025-05-31 16:08:15.551746 | orchestrator | ok: [testbed-manager] 2025-05-31 16:08:15.625188 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:08:15.701721 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:08:15.772793 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:08:15.846726 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:08:16.060187 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:08:16.060585 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:08:16.061173 | orchestrator | 2025-05-31 16:08:16.061641 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-31 16:08:16.064610 | orchestrator | Saturday 31 May 2025 16:08:16 +0000 (0:00:00.645) 0:00:00.832 ********** 2025-05-31 16:08:17.173013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:08:17.174703 | orchestrator | 2025-05-31 16:08:17.174733 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-31 16:08:17.175168 | orchestrator | Saturday 31 May 2025 16:08:17 +0000 (0:00:01.111) 0:00:01.944 ********** 2025-05-31 16:08:19.151120 | orchestrator | ok: [testbed-manager] 2025-05-31 16:08:19.152203 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:08:19.153603 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:08:19.154787 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:08:19.158077 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:08:19.160997 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:08:19.162152 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:08:19.163014 | orchestrator | 2025-05-31 16:08:19.164220 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-31 16:08:19.165179 | orchestrator | Saturday 31 May 2025 16:08:19 +0000 (0:00:01.980) 0:00:03.925 ********** 2025-05-31 16:08:19.713235 | orchestrator | changed: [testbed-manager] 2025-05-31 16:08:19.796382 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:08:20.241210 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:08:20.242835 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:08:20.244125 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:08:20.246598 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:08:20.247048 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:08:20.249442 | orchestrator | 2025-05-31 16:08:20.252825 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-31 16:08:20.252856 | orchestrator | Saturday 31 May 2025 16:08:20 +0000 (0:00:01.087) 0:00:05.013 ********** 2025-05-31 16:08:21.432392 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:08:21.432558 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:08:21.433169 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:08:21.433359 | orchestrator | ok: [testbed-manager] 
2025-05-31 16:08:21.434438 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:08:21.439125 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:08:21.439177 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:08:21.439190 | orchestrator | 2025-05-31 16:08:21.439206 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-31 16:08:21.439219 | orchestrator | Saturday 31 May 2025 16:08:21 +0000 (0:00:01.191) 0:00:06.204 ********** 2025-05-31 16:08:21.684587 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:08:21.765480 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:08:21.853304 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:08:21.929026 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:08:22.036891 | orchestrator | changed: [testbed-manager] 2025-05-31 16:08:22.037431 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:08:22.038136 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:08:22.038989 | orchestrator | 2025-05-31 16:08:22.039765 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-31 16:08:22.040259 | orchestrator | Saturday 31 May 2025 16:08:22 +0000 (0:00:00.606) 0:00:06.811 ********** 2025-05-31 16:08:35.154074 | orchestrator | changed: [testbed-manager] 2025-05-31 16:08:35.154197 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:08:35.156352 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:08:35.158212 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:08:35.158240 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:08:35.158251 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:08:35.158262 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:08:35.160242 | orchestrator | 2025-05-31 16:08:35.162225 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-31 16:08:35.162253 | orchestrator | Saturday 31 May 2025 16:08:35 +0000 (0:00:13.110) 0:00:19.921 ********** 2025-05-31 16:08:36.275904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:08:36.276307 | orchestrator | 2025-05-31 16:08:36.278330 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-31 16:08:36.278421 | orchestrator | Saturday 31 May 2025 16:08:36 +0000 (0:00:01.125) 0:00:21.047 ********** 2025-05-31 16:08:38.047585 | orchestrator | changed: [testbed-manager] 2025-05-31 16:08:38.047692 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:08:38.048214 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:08:38.048909 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:08:38.050222 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:08:38.053022 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:08:38.053060 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:08:38.053072 | orchestrator | 2025-05-31 16:08:38.053141 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:08:38.054327 | orchestrator | 2025-05-31 16:08:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:08:38.054424 | orchestrator | 2025-05-31 16:08:38 | INFO  | Please wait and do not abort execution. 
2025-05-31 16:08:38.055153 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:08:38.055952 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:38.056454 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:38.056929 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:38.057615 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:38.057971 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:38.058415 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:38.058974 | orchestrator | 2025-05-31 16:08:38.059436 | orchestrator | Saturday 31 May 2025 16:08:38 +0000 (0:00:01.775) 0:00:22.822 ********** 2025-05-31 16:08:38.059805 | orchestrator | =============================================================================== 2025-05-31 16:08:38.060226 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.11s 2025-05-31 16:08:38.060487 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.98s 2025-05-31 16:08:38.061648 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.78s 2025-05-31 16:08:38.061688 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s 2025-05-31 16:08:38.061699 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s 2025-05-31 16:08:38.061778 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.11s 2025-05-31 16:08:38.062010 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.09s 2025-05-31 16:08:38.062241 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.65s 2025-05-31 16:08:38.062488 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.61s 2025-05-31 16:08:38.611570 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-31 16:08:40.003811 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-31 16:08:40.003911 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-31 16:08:40.003946 | orchestrator | + local max_attempts=60 2025-05-31 16:08:40.003969 | orchestrator | + local name=ceph-ansible 2025-05-31 16:08:40.004052 | orchestrator | + local attempt_num=1 2025-05-31 16:08:40.004622 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-31 16:08:40.037544 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 16:08:40.037647 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-31 16:08:40.037662 | orchestrator | + local max_attempts=60 2025-05-31 16:08:40.037677 | orchestrator | + local name=kolla-ansible 2025-05-31 16:08:40.037689 | orchestrator | + local attempt_num=1 2025-05-31 16:08:40.037771 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-31 16:08:40.068877 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 16:08:40.068935 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-05-31 16:08:40.068949 | orchestrator | + local max_attempts=60 2025-05-31 16:08:40.068961 | orchestrator | + local name=osism-ansible 2025-05-31 16:08:40.068973 | orchestrator | + local attempt_num=1 2025-05-31 16:08:40.069914 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-31 16:08:40.092426 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 16:08:40.092479 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-31 16:08:40.092505 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-31 16:08:40.232358 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-31 16:08:40.389230 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-31 16:08:40.534149 | orchestrator | ARA in osism-ansible already disabled. 2025-05-31 16:08:40.682215 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-31 16:08:40.683156 | orchestrator | + osism apply gather-facts 2025-05-31 16:08:42.068090 | orchestrator | 2025-05-31 16:08:42 | INFO  | Task 1c0a462b-7220-45e0-bf03-1f876ea68886 (gather-facts) was prepared for execution. 2025-05-31 16:08:42.068211 | orchestrator | 2025-05-31 16:08:42 | INFO  | It takes a moment until task 1c0a462b-7220-45e0-bf03-1f876ea68886 (gather-facts) has been started and output is visible here. 2025-05-31 16:08:45.019094 | orchestrator | 2025-05-31 16:08:45.019291 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 16:08:45.020469 | orchestrator | 2025-05-31 16:08:45.021391 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 16:08:45.021894 | orchestrator | Saturday 31 May 2025 16:08:45 +0000 (0:00:00.154) 0:00:00.154 ********** 2025-05-31 16:08:49.919504 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:08:49.919902 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:08:49.921593 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:08:49.922303 | orchestrator | ok: [testbed-manager] 2025-05-31 16:08:49.923414 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:08:49.923698 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:08:49.925200 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:08:49.925424 | orchestrator | 2025-05-31 16:08:49.926075 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-31 16:08:49.927170 | orchestrator | 2025-05-31 16:08:49.927582 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-31 16:08:49.928425 | orchestrator | Saturday 31 May 2025 16:08:49 +0000 (0:00:04.904) 0:00:05.059 ********** 2025-05-31 16:08:50.063941 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:08:50.131616 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:08:50.202942 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:08:50.276833 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:08:50.351649 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:08:50.381609 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:08:50.381734 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:08:50.382181 | orchestrator | 2025-05-31 16:08:50.382533 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:08:50.383095 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 
16:08:50.383223 | orchestrator | 2025-05-31 16:08:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:08:50.383245 | orchestrator | 2025-05-31 16:08:50 | INFO  | Please wait and do not abort execution. 2025-05-31 16:08:50.383406 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:50.383511 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:50.383836 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:50.384057 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:50.384337 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:50.384428 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:08:50.384738 | orchestrator | 2025-05-31 16:08:50.385090 | orchestrator | Saturday 31 May 2025 16:08:50 +0000 (0:00:00.462) 0:00:05.521 ********** 2025-05-31 16:08:50.385217 | orchestrator | =============================================================================== 2025-05-31 16:08:50.386207 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s 2025-05-31 16:08:50.386759 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-05-31 16:08:50.856600 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-31 16:08:50.874283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-31 16:08:50.893459 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-31 16:08:50.911161 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-31 16:08:50.929683 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-31 16:08:50.948691 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-31 16:08:50.961547 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-31 16:08:50.973069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-31 16:08:50.988777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-31 16:08:51.001564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-31 16:08:51.013188 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-31 16:08:51.029367 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-31 16:08:51.042561 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-31 16:08:51.057330 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-31 16:08:51.069742 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-31 16:08:51.082273 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-31 16:08:51.092280 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-31 16:08:51.103979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-31 16:08:51.114970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-31 16:08:51.134322 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-31 16:08:51.152634 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-31 16:08:51.574036 | orchestrator | ok: Runtime: 0:25:14.650256 2025-05-31 16:08:51.689464 | 2025-05-31 16:08:51.689631 | TASK [Deploy services] 2025-05-31 16:08:52.225357 | orchestrator | skipping: Conditional result was False 2025-05-31 16:08:52.245595 | 2025-05-31 16:08:52.245882 | TASK [Deploy in a nutshell] 2025-05-31 16:08:52.948107 | orchestrator | + set -e 2025-05-31 16:08:52.948310 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 16:08:52.948685 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 16:08:52.948716 | orchestrator | ++ INTERACTIVE=false 2025-05-31 16:08:52.948730 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 16:08:52.948742 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 16:08:52.948944 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 16:08:52.949326 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 16:08:52.949361 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 16:08:52.949375 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 16:08:52.949391 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 16:08:52.949403 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 16:08:52.949421 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 16:08:52.949719 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-31 16:08:52.949746 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-31 16:08:52.949757 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 16:08:52.949772 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 16:08:52.949783 | orchestrator | ++ export ARA=false 2025-05-31 16:08:52.949795 | orchestrator | ++ ARA=false 2025-05-31 16:08:52.949806 | orchestrator | ++ export TEMPEST=false 2025-05-31 16:08:52.949821 | orchestrator | ++ TEMPEST=false 2025-05-31 16:08:52.949989 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 16:08:52.950758 | orchestrator | 2025-05-31 16:08:52.950777 | orchestrator | # PULL IMAGES 2025-05-31 16:08:52.950790 | orchestrator | 2025-05-31 16:08:52.950802 | orchestrator | ++ IS_ZUUL=true 2025-05-31 16:08:52.950816 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 16:08:52.950828 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2025-05-31 16:08:52.950841 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 16:08:52.950854 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 16:08:52.950866 | orchestrator | ++ export IMAGE_USER=ubuntu 
2025-05-31 16:08:52.950878 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 16:08:52.950890 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 16:08:52.950903 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 16:08:52.950915 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 16:08:52.950927 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 16:08:52.950940 | orchestrator | + echo 2025-05-31 16:08:52.950953 | orchestrator | + echo '# PULL IMAGES' 2025-05-31 16:08:52.950965 | orchestrator | + echo 2025-05-31 16:08:52.952035 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-31 16:08:52.997366 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-31 16:08:52.997397 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-31 16:08:54.327313 | orchestrator | 2025-05-31 16:08:54 | INFO  | Trying to run play pull-images in environment custom 2025-05-31 16:08:54.373873 | orchestrator | 2025-05-31 16:08:54 | INFO  | Task 4e8875aa-974a-4b2b-ae00-b36cff95d89a (pull-images) was prepared for execution. 2025-05-31 16:08:54.373975 | orchestrator | 2025-05-31 16:08:54 | INFO  | It takes a moment until task 4e8875aa-974a-4b2b-ae00-b36cff95d89a (pull-images) has been started and output is visible here. 2025-05-31 16:08:57.361053 | orchestrator | 2025-05-31 16:08:57.362382 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-31 16:08:57.362974 | orchestrator | 2025-05-31 16:08:57.364636 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-31 16:08:57.366339 | orchestrator | Saturday 31 May 2025 16:08:57 +0000 (0:00:00.139) 0:00:00.139 ********** 2025-05-31 16:09:33.380866 | orchestrator | changed: [testbed-manager] 2025-05-31 16:09:33.380991 | orchestrator | 2025-05-31 16:09:33.381009 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-31 16:09:33.381024 | orchestrator | Saturday 31 May 2025 16:09:33 +0000 (0:00:36.011) 0:00:36.151 ********** 2025-05-31 16:10:17.118444 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-31 16:10:17.118591 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-31 16:10:17.118607 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-31 16:10:17.118619 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-31 16:10:17.118630 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-31 16:10:17.118653 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-31 16:10:17.119154 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-31 16:10:17.120022 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-31 16:10:17.121357 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-31 16:10:17.123875 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-31 16:10:17.125650 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-31 16:10:17.127195 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-31 16:10:17.127661 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-31 16:10:17.128797 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-31 16:10:17.129828 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-31 16:10:17.130574 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-31 16:10:17.131125 | orchestrator | changed: [testbed-manager] => (item=octavia) 
2025-05-31 16:10:17.131685 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-31 16:10:17.132131 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-31 16:10:17.132795 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-31 16:10:17.133432 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-31 16:10:17.134292 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-31 16:10:17.135025 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-31 16:10:17.136249 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-31 16:10:17.136270 | orchestrator | 2025-05-31 16:10:17.138430 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:10:17.138797 | orchestrator | 2025-05-31 16:10:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:10:17.138827 | orchestrator | 2025-05-31 16:10:17 | INFO  | Please wait and do not abort execution. 2025-05-31 16:10:17.139417 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:10:17.140069 | orchestrator | 2025-05-31 16:10:17.141217 | orchestrator | Saturday 31 May 2025 16:10:17 +0000 (0:00:43.746) 0:01:19.898 ********** 2025-05-31 16:10:17.141243 | orchestrator | =============================================================================== 2025-05-31 16:10:17.141581 | orchestrator | Pull other images ------------------------------------------------------ 43.75s 2025-05-31 16:10:17.142057 | orchestrator | Pull keystone image ---------------------------------------------------- 36.01s 2025-05-31 16:10:19.027699 | orchestrator | 2025-05-31 16:10:19 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-31 16:10:19.091508 | orchestrator | 2025-05-31 16:10:19 | INFO  | Task 546dd99d-bb0f-4f01-bf50-a489e946c316 (wipe-partitions) was prepared for execution. 2025-05-31 16:10:19.091600 | orchestrator | 2025-05-31 16:10:19 | INFO  | It takes a moment until task 546dd99d-bb0f-4f01-bf50-a489e946c316 (wipe-partitions) has been started and output is visible here. 
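Note: earlier in this run (around the docker-compose@manager restart, before the image pull), the wait_for_container_healthy helper polls the Docker health status of the ceph-ansible, kolla-ansible and osism-ansible containers. A minimal sketch consistent with that xtrace; only the variable names and the docker inspect call are taken from the trace, the retry loop and sleep interval are assumptions:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll the container's health status until Docker reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5   # interval is an assumption
        done
    }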
2025-05-31 16:10:22.107302 | orchestrator | 2025-05-31 16:10:22.107410 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-31 16:10:22.107426 | orchestrator | 2025-05-31 16:10:22.107439 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-31 16:10:22.107742 | orchestrator | Saturday 31 May 2025 16:10:22 +0000 (0:00:00.141) 0:00:00.141 ********** 2025-05-31 16:10:22.697494 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:10:22.697609 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:10:22.697625 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:10:22.697637 | orchestrator | 2025-05-31 16:10:22.697649 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-31 16:10:22.697729 | orchestrator | Saturday 31 May 2025 16:10:22 +0000 (0:00:00.594) 0:00:00.735 ********** 2025-05-31 16:10:22.843313 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:22.942695 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:10:22.942792 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:10:22.942904 | orchestrator | 2025-05-31 16:10:22.942947 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-31 16:10:22.943008 | orchestrator | Saturday 31 May 2025 16:10:22 +0000 (0:00:00.246) 0:00:00.981 ********** 2025-05-31 16:10:23.587684 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:10:23.589024 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:23.589138 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:10:23.589951 | orchestrator | 2025-05-31 16:10:23.589988 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-31 16:10:23.590259 | orchestrator | Saturday 31 May 2025 16:10:23 +0000 (0:00:00.643) 0:00:01.625 ********** 2025-05-31 16:10:23.717951 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:23.816427 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:10:23.817713 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:10:23.818609 | orchestrator | 2025-05-31 16:10:23.819559 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-31 16:10:23.820436 | orchestrator | Saturday 31 May 2025 16:10:23 +0000 (0:00:00.228) 0:00:01.854 ********** 2025-05-31 16:10:24.999150 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-31 16:10:24.999237 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-31 16:10:24.999250 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-31 16:10:24.999262 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-31 16:10:25.001722 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-31 16:10:25.002850 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-31 16:10:25.002875 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-31 16:10:25.003131 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-31 16:10:25.003352 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-31 16:10:25.003601 | orchestrator | 2025-05-31 16:10:25.003809 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-31 16:10:25.004055 | orchestrator | Saturday 31 May 2025 16:10:24 +0000 (0:00:01.182) 0:00:03.036 ********** 2025-05-31 16:10:26.326939 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-31 16:10:26.327034 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-31 16:10:26.327195 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-31 16:10:26.327216 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-31 16:10:26.327500 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-31 16:10:26.327591 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-31 16:10:26.328809 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-31 16:10:26.329135 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-31 16:10:26.329259 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-31 16:10:26.330158 | orchestrator | 2025-05-31 16:10:26.330258 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-31 16:10:26.333650 | orchestrator | Saturday 31 May 2025 16:10:26 +0000 (0:00:01.329) 0:00:04.366 ********** 2025-05-31 16:10:29.197555 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-31 16:10:29.200048 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-31 16:10:29.200746 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-31 16:10:29.201247 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-31 16:10:29.202651 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-31 16:10:29.205691 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-31 16:10:29.206641 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-31 16:10:29.207270 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-31 16:10:29.207706 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-31 16:10:29.208784 | orchestrator | 2025-05-31 16:10:29.209082 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-31 16:10:29.209780 | orchestrator | Saturday 31 May 2025 16:10:29 +0000 (0:00:02.865) 0:00:07.231 ********** 2025-05-31 16:10:29.766495 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:10:29.766692 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:10:29.766979 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:10:29.767522 | orchestrator | 2025-05-31 16:10:29.767814 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-31 16:10:29.768150 | orchestrator | Saturday 31 May 2025 16:10:29 +0000 (0:00:00.574) 0:00:07.805 ********** 2025-05-31 16:10:30.376520 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:10:30.377704 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:10:30.377993 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:10:30.378581 | orchestrator | 2025-05-31 16:10:30.378923 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:10:30.379161 | orchestrator | 2025-05-31 16:10:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:10:30.379370 | orchestrator | 2025-05-31 16:10:30 | INFO  | Please wait and do not abort execution. 
2025-05-31 16:10:30.379880 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:30.380307 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:30.380871 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:30.381384 | orchestrator | 2025-05-31 16:10:30.382918 | orchestrator | Saturday 31 May 2025 16:10:30 +0000 (0:00:00.609) 0:00:08.415 ********** 2025-05-31 16:10:30.382942 | orchestrator | =============================================================================== 2025-05-31 16:10:30.382955 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.87s 2025-05-31 16:10:30.383374 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-05-31 16:10:30.384069 | orchestrator | Check device availability ----------------------------------------------- 1.18s 2025-05-31 16:10:30.384359 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.64s 2025-05-31 16:10:30.384932 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-05-31 16:10:30.385359 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-05-31 16:10:30.385812 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2025-05-31 16:10:30.386435 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-05-31 16:10:30.386805 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-05-31 16:10:31.962535 | orchestrator | 2025-05-31 16:10:31 | INFO  | Task 870b6552-026a-4f6c-9db6-a377bd4df1c5 (facts) was prepared for execution. 2025-05-31 16:10:31.962736 | orchestrator | 2025-05-31 16:10:31 | INFO  | It takes a moment until task 870b6552-026a-4f6c-9db6-a377bd4df1c5 (facts) has been started and output is visible here. 
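Note: the wipe-partitions play above prepares /dev/sdb, /dev/sdc and /dev/sdd on the storage nodes (testbed-node-3..5) for Ceph by removing old signatures and zeroing the start of each disk. An approximate per-host equivalent (device list taken from the play output; the exact flags are assumptions):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                                    # drop filesystem/LVM/RAID signatures
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # overwrite the first 32M with zeros
    done
    sudo udevadm control --reload-rules                             # reload udev rules
    sudo udevadm trigger                                            # request device events from the kernel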
2025-05-31 16:10:34.951712 | orchestrator | 2025-05-31 16:10:34.951802 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-31 16:10:34.951815 | orchestrator | 2025-05-31 16:10:34.951823 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-31 16:10:34.952914 | orchestrator | Saturday 31 May 2025 16:10:34 +0000 (0:00:00.194) 0:00:00.194 ********** 2025-05-31 16:10:35.961088 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:10:35.961214 | orchestrator | ok: [testbed-manager] 2025-05-31 16:10:35.962307 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:10:35.962766 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:10:35.963971 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:35.964813 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:10:35.965020 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:10:35.965807 | orchestrator | 2025-05-31 16:10:35.966076 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-31 16:10:35.966543 | orchestrator | Saturday 31 May 2025 16:10:35 +0000 (0:00:01.013) 0:00:01.207 ********** 2025-05-31 16:10:36.116385 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:10:36.191815 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:10:36.266225 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:10:36.342590 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:10:36.415621 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:37.102070 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:10:37.102301 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:10:37.102831 | orchestrator | 2025-05-31 16:10:37.103447 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 16:10:37.104288 | orchestrator | 2025-05-31 16:10:37.107806 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 16:10:37.107831 | orchestrator | Saturday 31 May 2025 16:10:37 +0000 (0:00:01.146) 0:00:02.354 ********** 2025-05-31 16:10:41.696239 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:10:41.697571 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:10:41.698468 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:10:41.699830 | orchestrator | ok: [testbed-manager] 2025-05-31 16:10:41.700588 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:10:41.701689 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:10:41.702708 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:41.703887 | orchestrator | 2025-05-31 16:10:41.703909 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-31 16:10:41.705041 | orchestrator | 2025-05-31 16:10:41.705731 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-31 16:10:41.706425 | orchestrator | Saturday 31 May 2025 16:10:41 +0000 (0:00:04.592) 0:00:06.947 ********** 2025-05-31 16:10:41.970347 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:10:42.038282 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:10:42.111446 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:10:42.184182 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:10:42.255296 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:42.292859 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:10:42.293272 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 16:10:42.295414 | orchestrator | 2025-05-31 16:10:42.296815 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:10:42.299796 | orchestrator | 2025-05-31 16:10:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:10:42.299826 | orchestrator | 2025-05-31 16:10:42 | INFO  | Please wait and do not abort execution. 2025-05-31 16:10:42.300364 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.301667 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.302299 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.303620 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.304335 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.305217 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.305847 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:10:42.307022 | orchestrator | 2025-05-31 16:10:42.307718 | orchestrator | Saturday 31 May 2025 16:10:42 +0000 (0:00:00.595) 0:00:07.542 ********** 2025-05-31 16:10:42.308648 | orchestrator | =============================================================================== 2025-05-31 16:10:42.308999 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.59s 2025-05-31 16:10:42.309776 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s 2025-05-31 16:10:42.310404 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2025-05-31 16:10:42.310907 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2025-05-31 16:10:44.217610 | orchestrator | 2025-05-31 16:10:44 | INFO  | Task 3255b1b1-2b40-454c-a3a4-155569c1ceed (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-31 16:10:44.219804 | orchestrator | 2025-05-31 16:10:44 | INFO  | It takes a moment until task 3255b1b1-2b40-454c-a3a4-155569c1ceed (ceph-configure-lvm-volumes) has been started and output is visible here. 
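Note: the ceph-configure-lvm-volumes play that follows resolves stable /dev/disk/by-id links for each candidate block device before generating the LVM layout for the OSDs. For illustration only, one way to list those links per device on a node (not the play's actual implementation; the scsi-* IDs seen in the output come from the QEMU-emulated disks):

    for dev in sdb sdc sdd; do
        echo "== /dev/$dev =="
        find /dev/disk/by-id -lname "*/$dev"   # by-id symlinks that resolve to this device
    done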
2025-05-31 16:10:48.433202 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-31 16:10:48.932633 | orchestrator | 2025-05-31 16:10:48.932705 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-31 16:10:48.932717 | orchestrator | 2025-05-31 16:10:48.932727 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 16:10:48.933693 | orchestrator | Saturday 31 May 2025 16:10:48 +0000 (0:00:00.424) 0:00:00.424 ********** 2025-05-31 16:10:49.159226 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-31 16:10:49.159323 | orchestrator | 2025-05-31 16:10:49.161293 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 16:10:49.161374 | orchestrator | Saturday 31 May 2025 16:10:49 +0000 (0:00:00.225) 0:00:00.650 ********** 2025-05-31 16:10:49.363014 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:49.363244 | orchestrator | 2025-05-31 16:10:49.363333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:49.363351 | orchestrator | Saturday 31 May 2025 16:10:49 +0000 (0:00:00.206) 0:00:00.857 ********** 2025-05-31 16:10:49.844583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-31 16:10:49.844758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-31 16:10:49.844843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-31 16:10:49.846290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-31 16:10:49.847767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-31 16:10:49.848808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-31 16:10:49.850073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-31 16:10:49.851750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-31 16:10:49.852188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-31 16:10:49.853416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-31 16:10:49.854258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-31 16:10:49.855647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-31 16:10:49.856409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-31 16:10:49.858091 | orchestrator | 2025-05-31 16:10:49.858947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:49.859822 | orchestrator | Saturday 31 May 2025 16:10:49 +0000 (0:00:00.475) 0:00:01.333 ********** 2025-05-31 16:10:50.031493 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:50.033385 | orchestrator | 2025-05-31 16:10:50.034340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:50.041130 | orchestrator | Saturday 31 May 2025 16:10:50 +0000 
(0:00:00.193) 0:00:01.526 ********** 2025-05-31 16:10:50.233679 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:50.233794 | orchestrator | 2025-05-31 16:10:50.233809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:50.233905 | orchestrator | Saturday 31 May 2025 16:10:50 +0000 (0:00:00.203) 0:00:01.730 ********** 2025-05-31 16:10:50.438305 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:50.439856 | orchestrator | 2025-05-31 16:10:50.439920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:50.440122 | orchestrator | Saturday 31 May 2025 16:10:50 +0000 (0:00:00.203) 0:00:01.933 ********** 2025-05-31 16:10:50.652488 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:50.652604 | orchestrator | 2025-05-31 16:10:50.652971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:50.653003 | orchestrator | Saturday 31 May 2025 16:10:50 +0000 (0:00:00.214) 0:00:02.147 ********** 2025-05-31 16:10:50.854320 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:50.855858 | orchestrator | 2025-05-31 16:10:50.856203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:50.856677 | orchestrator | Saturday 31 May 2025 16:10:50 +0000 (0:00:00.201) 0:00:02.349 ********** 2025-05-31 16:10:51.057075 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:51.057808 | orchestrator | 2025-05-31 16:10:51.057979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:51.058271 | orchestrator | Saturday 31 May 2025 16:10:51 +0000 (0:00:00.202) 0:00:02.551 ********** 2025-05-31 16:10:51.263483 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:51.263585 | orchestrator | 2025-05-31 16:10:51.265034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:51.266754 | orchestrator | Saturday 31 May 2025 16:10:51 +0000 (0:00:00.204) 0:00:02.755 ********** 2025-05-31 16:10:51.448240 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:51.448347 | orchestrator | 2025-05-31 16:10:51.448363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:51.451309 | orchestrator | Saturday 31 May 2025 16:10:51 +0000 (0:00:00.186) 0:00:02.941 ********** 2025-05-31 16:10:52.077508 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4) 2025-05-31 16:10:52.077639 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4) 2025-05-31 16:10:52.077721 | orchestrator | 2025-05-31 16:10:52.077738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:52.077951 | orchestrator | Saturday 31 May 2025 16:10:52 +0000 (0:00:00.627) 0:00:03.569 ********** 2025-05-31 16:10:52.816092 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae) 2025-05-31 16:10:52.816848 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae) 2025-05-31 16:10:52.816878 | orchestrator | 2025-05-31 16:10:52.817307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 
16:10:52.817582 | orchestrator | Saturday 31 May 2025 16:10:52 +0000 (0:00:00.742) 0:00:04.312 ********** 2025-05-31 16:10:53.194930 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72) 2025-05-31 16:10:53.196718 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72) 2025-05-31 16:10:53.196920 | orchestrator | 2025-05-31 16:10:53.199665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:53.199694 | orchestrator | Saturday 31 May 2025 16:10:53 +0000 (0:00:00.378) 0:00:04.691 ********** 2025-05-31 16:10:53.580976 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da) 2025-05-31 16:10:53.581821 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da) 2025-05-31 16:10:53.582087 | orchestrator | 2025-05-31 16:10:53.582322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:10:53.584391 | orchestrator | Saturday 31 May 2025 16:10:53 +0000 (0:00:00.386) 0:00:05.077 ********** 2025-05-31 16:10:53.861395 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 16:10:53.861476 | orchestrator | 2025-05-31 16:10:53.861535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:53.861605 | orchestrator | Saturday 31 May 2025 16:10:53 +0000 (0:00:00.279) 0:00:05.357 ********** 2025-05-31 16:10:54.251621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-31 16:10:54.252328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-31 16:10:54.252376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-31 16:10:54.252388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-31 16:10:54.253868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-31 16:10:54.254010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-31 16:10:54.254081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-31 16:10:54.255590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-31 16:10:54.255615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-31 16:10:54.255626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-31 16:10:54.255637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-31 16:10:54.256384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-31 16:10:54.257011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-31 16:10:54.257032 | orchestrator | 2025-05-31 16:10:54.257044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:54.257055 | orchestrator | Saturday 31 May 2025 16:10:54 +0000 
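The same pattern is repeated for partitions: _add-device-partitions.yml is included once per device and, where the facts report partitions (sda1, sda14, sda15 and sda16 above), their names are appended as well. Again a sketch only, reusing the hypothetical available_devices fact from the previous example.

  # Illustrative sketch only: not the actual /ansible/tasks/_add-device-partitions.yml.
  - name: Append known partitions to the device list (sketch)
    hosts: testbed-node-3
    gather_facts: true
    tasks:
      - name: Add the partitions of every disk (empty for loop devices and sr0)
        ansible.builtin.set_fact:
          available_devices: "{{ (available_devices | default([])) + ((item.value.partitions | default({})).keys() | list) }}"
        loop: "{{ ansible_facts.devices | dict2items }}"
        loop_control:
          label: "{{ item.key }}"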
(0:00:00.390) 0:00:05.748 ********** 2025-05-31 16:10:54.443617 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:54.443698 | orchestrator | 2025-05-31 16:10:54.443712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:54.444754 | orchestrator | Saturday 31 May 2025 16:10:54 +0000 (0:00:00.189) 0:00:05.937 ********** 2025-05-31 16:10:54.622832 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:54.622932 | orchestrator | 2025-05-31 16:10:54.622954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:54.624721 | orchestrator | Saturday 31 May 2025 16:10:54 +0000 (0:00:00.182) 0:00:06.119 ********** 2025-05-31 16:10:54.820887 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:54.823745 | orchestrator | 2025-05-31 16:10:54.823794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:54.823808 | orchestrator | Saturday 31 May 2025 16:10:54 +0000 (0:00:00.197) 0:00:06.317 ********** 2025-05-31 16:10:54.963953 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:54.964052 | orchestrator | 2025-05-31 16:10:54.964252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:54.964332 | orchestrator | Saturday 31 May 2025 16:10:54 +0000 (0:00:00.143) 0:00:06.460 ********** 2025-05-31 16:10:55.113038 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:55.113147 | orchestrator | 2025-05-31 16:10:55.113286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:55.113366 | orchestrator | Saturday 31 May 2025 16:10:55 +0000 (0:00:00.147) 0:00:06.608 ********** 2025-05-31 16:10:55.533547 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:55.534797 | orchestrator | 2025-05-31 16:10:55.535260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:55.537056 | orchestrator | Saturday 31 May 2025 16:10:55 +0000 (0:00:00.420) 0:00:07.029 ********** 2025-05-31 16:10:55.714971 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:55.715038 | orchestrator | 2025-05-31 16:10:55.715051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:55.715064 | orchestrator | Saturday 31 May 2025 16:10:55 +0000 (0:00:00.179) 0:00:07.209 ********** 2025-05-31 16:10:55.899318 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:55.901541 | orchestrator | 2025-05-31 16:10:55.901576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:55.901649 | orchestrator | Saturday 31 May 2025 16:10:55 +0000 (0:00:00.185) 0:00:07.394 ********** 2025-05-31 16:10:56.412589 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-31 16:10:56.413082 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-31 16:10:56.413383 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-31 16:10:56.414232 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-31 16:10:56.414693 | orchestrator | 2025-05-31 16:10:56.414964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:56.418151 | orchestrator | Saturday 31 May 2025 16:10:56 +0000 (0:00:00.514) 0:00:07.909 ********** 2025-05-31 16:10:56.572294 | orchestrator | 
skipping: [testbed-node-3] 2025-05-31 16:10:56.574209 | orchestrator | 2025-05-31 16:10:56.574771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:56.575349 | orchestrator | Saturday 31 May 2025 16:10:56 +0000 (0:00:00.155) 0:00:08.065 ********** 2025-05-31 16:10:56.746853 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:56.747296 | orchestrator | 2025-05-31 16:10:56.749428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:56.749762 | orchestrator | Saturday 31 May 2025 16:10:56 +0000 (0:00:00.176) 0:00:08.241 ********** 2025-05-31 16:10:56.917683 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:56.919483 | orchestrator | 2025-05-31 16:10:56.919727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:10:56.919975 | orchestrator | Saturday 31 May 2025 16:10:56 +0000 (0:00:00.171) 0:00:08.412 ********** 2025-05-31 16:10:57.100230 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:57.100408 | orchestrator | 2025-05-31 16:10:57.101586 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-31 16:10:57.102639 | orchestrator | Saturday 31 May 2025 16:10:57 +0000 (0:00:00.180) 0:00:08.593 ********** 2025-05-31 16:10:57.241447 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-31 16:10:57.244654 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-31 16:10:57.245351 | orchestrator | 2025-05-31 16:10:57.246247 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-31 16:10:57.247229 | orchestrator | Saturday 31 May 2025 16:10:57 +0000 (0:00:00.141) 0:00:08.735 ********** 2025-05-31 16:10:57.364372 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:57.370069 | orchestrator | 2025-05-31 16:10:57.370687 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-31 16:10:57.371537 | orchestrator | Saturday 31 May 2025 16:10:57 +0000 (0:00:00.124) 0:00:08.860 ********** 2025-05-31 16:10:57.485538 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:57.485627 | orchestrator | 2025-05-31 16:10:57.490297 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-31 16:10:57.490626 | orchestrator | Saturday 31 May 2025 16:10:57 +0000 (0:00:00.119) 0:00:08.979 ********** 2025-05-31 16:10:57.721076 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:57.721354 | orchestrator | 2025-05-31 16:10:57.724698 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-31 16:10:57.724743 | orchestrator | Saturday 31 May 2025 16:10:57 +0000 (0:00:00.238) 0:00:09.217 ********** 2025-05-31 16:10:57.847535 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:57.848465 | orchestrator | 2025-05-31 16:10:57.848680 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-31 16:10:57.849092 | orchestrator | Saturday 31 May 2025 16:10:57 +0000 (0:00:00.126) 0:00:09.344 ********** 2025-05-31 16:10:58.047385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a14fa-64bd-59a3-8350-23173f11027f'}}) 2025-05-31 16:10:58.047618 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '92adfeec-5c5c-5208-b88e-9a01a071247e'}}) 2025-05-31 16:10:58.048063 | orchestrator | 2025-05-31 16:10:58.051670 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-31 16:10:58.053995 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.192) 0:00:09.536 ********** 2025-05-31 16:10:58.179604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a14fa-64bd-59a3-8350-23173f11027f'}})  2025-05-31 16:10:58.180210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92adfeec-5c5c-5208-b88e-9a01a071247e'}})  2025-05-31 16:10:58.180941 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:58.181250 | orchestrator | 2025-05-31 16:10:58.183745 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-31 16:10:58.184013 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.139) 0:00:09.676 ********** 2025-05-31 16:10:58.320854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a14fa-64bd-59a3-8350-23173f11027f'}})  2025-05-31 16:10:58.321788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92adfeec-5c5c-5208-b88e-9a01a071247e'}})  2025-05-31 16:10:58.323239 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:58.323286 | orchestrator | 2025-05-31 16:10:58.323300 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-31 16:10:58.323313 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.139) 0:00:09.815 ********** 2025-05-31 16:10:58.464018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a14fa-64bd-59a3-8350-23173f11027f'}})  2025-05-31 16:10:58.464094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92adfeec-5c5c-5208-b88e-9a01a071247e'}})  2025-05-31 16:10:58.464106 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:58.464180 | orchestrator | 2025-05-31 16:10:58.464193 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-31 16:10:58.464204 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.141) 0:00:09.956 ********** 2025-05-31 16:10:58.583921 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:58.584005 | orchestrator | 2025-05-31 16:10:58.584112 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-31 16:10:58.584288 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.120) 0:00:10.077 ********** 2025-05-31 16:10:58.709547 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:10:58.709640 | orchestrator | 2025-05-31 16:10:58.709657 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-31 16:10:58.709670 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.125) 0:00:10.203 ********** 2025-05-31 16:10:58.811441 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:10:58.811544 | orchestrator | 2025-05-31 16:10:58.811561 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-31 16:10:58.811662 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.104) 0:00:10.308 ********** 2025-05-31 16:10:58.930813 | orchestrator | skipping: [testbed-node-3] 2025-05-31 
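The core of the play is the step above: each OSD data disk gets a stable UUID, and from that UUID the block-only lvm_volumes entries are derived as data: osd-block-<uuid> and data_vg: ceph-<uuid>, the format the ceph-ansible OSD role consumes. The sketch below reproduces that mapping with the stock to_uuid filter; the exact seed string used by the real playbook is an assumption.

  # Illustrative sketch only: the seed string for to_uuid is an assumption. The point
  # is that the UUID is derived deterministically per host and device, so re-running
  # the play yields the same VG/LV names.
  - name: Derive OSD UUIDs and the block-only lvm_volumes entries (sketch)
    hosts: testbed-node-3
    gather_facts: false
    vars:
      ceph_osd_devices:
        sdb: {}
        sdc: {}
    tasks:
      - name: Assign a stable UUID per OSD device (assumed seed: hostname + device)
        ansible.builtin.set_fact:
          ceph_osd_devices: >-
            {{ ceph_osd_devices | combine({item.key: {'osd_lvm_uuid': (inventory_hostname ~ '-' ~ item.key) | to_uuid}}) }}
        loop: "{{ ceph_osd_devices | dict2items }}"
        loop_control:
          label: "{{ item.key }}"

      - name: Build the block-only lvm_volumes list (format used by ceph-ansible)
        ansible.builtin.set_fact:
          lvm_volumes: >-
            {{ (lvm_volumes | default([])) +
               [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
                 'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
        loop: "{{ ceph_osd_devices | dict2items }}"
        loop_control:
          label: "{{ item.key }}"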
16:10:58.930913 | orchestrator |
2025-05-31 16:10:58.931045 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-31 16:10:58.931067 | orchestrator | Saturday 31 May 2025 16:10:58 +0000 (0:00:00.118) 0:00:10.427 **********
2025-05-31 16:10:59.051805 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:10:59.051914 | orchestrator |
2025-05-31 16:10:59.052062 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-31 16:10:59.053436 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.118) 0:00:10.545 **********
2025-05-31 16:10:59.165369 | orchestrator | ok: [testbed-node-3] => {
2025-05-31 16:10:59.167066 | orchestrator |     "ceph_osd_devices": {
2025-05-31 16:10:59.167503 | orchestrator |         "sdb": {
2025-05-31 16:10:59.167872 | orchestrator |             "osd_lvm_uuid": "e43a14fa-64bd-59a3-8350-23173f11027f"
2025-05-31 16:10:59.168498 | orchestrator |         },
2025-05-31 16:10:59.171775 | orchestrator |         "sdc": {
2025-05-31 16:10:59.172195 | orchestrator |             "osd_lvm_uuid": "92adfeec-5c5c-5208-b88e-9a01a071247e"
2025-05-31 16:10:59.172380 | orchestrator |         }
2025-05-31 16:10:59.172704 | orchestrator |     }
2025-05-31 16:10:59.172961 | orchestrator | }
2025-05-31 16:10:59.173890 | orchestrator |
2025-05-31 16:10:59.173914 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-31 16:10:59.174193 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.116) 0:00:10.662 **********
2025-05-31 16:10:59.424397 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:10:59.426614 | orchestrator |
2025-05-31 16:10:59.427665 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-31 16:10:59.428643 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.258) 0:00:10.921 **********
2025-05-31 16:10:59.553340 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:10:59.554387 | orchestrator |
2025-05-31 16:10:59.555277 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-31 16:10:59.555901 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.127) 0:00:11.048 **********
2025-05-31 16:10:59.676422 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:10:59.677074 | orchestrator |
2025-05-31 16:10:59.678902 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-31 16:10:59.679456 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.122) 0:00:11.171 **********
2025-05-31 16:10:59.944911 | orchestrator | changed: [testbed-node-3] => {
2025-05-31 16:10:59.945010 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-31 16:10:59.945025 | orchestrator |         "ceph_osd_devices": {
2025-05-31 16:10:59.945037 | orchestrator |             "sdb": {
2025-05-31 16:10:59.945049 | orchestrator |                 "osd_lvm_uuid": "e43a14fa-64bd-59a3-8350-23173f11027f"
2025-05-31 16:10:59.945061 | orchestrator |             },
2025-05-31 16:10:59.945162 | orchestrator |             "sdc": {
2025-05-31 16:10:59.945531 | orchestrator |                 "osd_lvm_uuid": "92adfeec-5c5c-5208-b88e-9a01a071247e"
2025-05-31 16:10:59.945802 | orchestrator |             }
2025-05-31 16:10:59.948159 | orchestrator |         },
2025-05-31 16:10:59.948233 | orchestrator |         "lvm_volumes": [
2025-05-31 16:10:59.948321 | orchestrator |             {
2025-05-31 16:10:59.948772 | orchestrator |                 "data": "osd-block-e43a14fa-64bd-59a3-8350-23173f11027f",
2025-05-31 16:10:59.949199 | orchestrator |                 "data_vg": "ceph-e43a14fa-64bd-59a3-8350-23173f11027f"
2025-05-31 16:10:59.949519 | orchestrator |             },
2025-05-31 16:10:59.949958 | orchestrator |             {
2025-05-31 16:10:59.951868 | orchestrator |                 "data": "osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e",
2025-05-31 16:10:59.952243 | orchestrator |                 "data_vg": "ceph-92adfeec-5c5c-5208-b88e-9a01a071247e"
2025-05-31 16:10:59.952702 | orchestrator |             }
2025-05-31 16:10:59.953072 | orchestrator |         ]
2025-05-31 16:10:59.956096 | orchestrator |     }
2025-05-31 16:10:59.956190 | orchestrator | }
2025-05-31 16:10:59.956584 | orchestrator |
2025-05-31 16:10:59.956903 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-31 16:10:59.957430 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.267) 0:00:11.438 **********
2025-05-31 16:11:01.697872 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-31 16:11:01.699271 | orchestrator |
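The handler above runs delegated to testbed-manager and persists the generated data. Based on the "Print configuration data" output, the written file should contain roughly the following; the file location is an assumption, while the keys and values are taken verbatim from the log.

  # Assumed target, e.g. a host_vars file on the manager such as
  # /opt/configuration/inventory/host_vars/testbed-node-3.yml (the handler's real path may differ).
  ceph_osd_devices:
    sdb:
      osd_lvm_uuid: e43a14fa-64bd-59a3-8350-23173f11027f
    sdc:
      osd_lvm_uuid: 92adfeec-5c5c-5208-b88e-9a01a071247e
  lvm_volumes:
    - data: osd-block-e43a14fa-64bd-59a3-8350-23173f11027f
      data_vg: ceph-e43a14fa-64bd-59a3-8350-23173f11027f
    - data: osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e
      data_vg: ceph-92adfeec-5c5c-5208-b88e-9a01a071247e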
"data_vg": "ceph-e43a14fa-64bd-59a3-8350-23173f11027f" 2025-05-31 16:10:59.949519 | orchestrator |  }, 2025-05-31 16:10:59.949958 | orchestrator |  { 2025-05-31 16:10:59.951868 | orchestrator |  "data": "osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e", 2025-05-31 16:10:59.952243 | orchestrator |  "data_vg": "ceph-92adfeec-5c5c-5208-b88e-9a01a071247e" 2025-05-31 16:10:59.952702 | orchestrator |  } 2025-05-31 16:10:59.953072 | orchestrator |  ] 2025-05-31 16:10:59.956096 | orchestrator |  } 2025-05-31 16:10:59.956190 | orchestrator | } 2025-05-31 16:10:59.956584 | orchestrator | 2025-05-31 16:10:59.956903 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-31 16:10:59.957430 | orchestrator | Saturday 31 May 2025 16:10:59 +0000 (0:00:00.267) 0:00:11.438 ********** 2025-05-31 16:11:01.697872 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-31 16:11:01.699271 | orchestrator | 2025-05-31 16:11:01.700498 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-31 16:11:01.701492 | orchestrator | 2025-05-31 16:11:01.702243 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 16:11:01.703406 | orchestrator | Saturday 31 May 2025 16:11:01 +0000 (0:00:01.751) 0:00:13.190 ********** 2025-05-31 16:11:01.981815 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-31 16:11:01.982809 | orchestrator | 2025-05-31 16:11:01.983225 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 16:11:01.983801 | orchestrator | Saturday 31 May 2025 16:11:01 +0000 (0:00:00.287) 0:00:13.477 ********** 2025-05-31 16:11:02.227075 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:11:02.231344 | orchestrator | 2025-05-31 16:11:02.232786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:02.234392 | orchestrator | Saturday 31 May 2025 16:11:02 +0000 (0:00:00.244) 0:00:13.721 ********** 2025-05-31 16:11:02.635579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-31 16:11:02.636070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-31 16:11:02.636775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-31 16:11:02.637257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-31 16:11:02.641004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-31 16:11:02.641346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-31 16:11:02.641611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-31 16:11:02.642082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-31 16:11:02.642484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-31 16:11:02.643098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-31 16:11:02.643750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-31 16:11:02.643815 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-31 16:11:02.644318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-31 16:11:02.644883 | orchestrator | 2025-05-31 16:11:02.645243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:02.645739 | orchestrator | Saturday 31 May 2025 16:11:02 +0000 (0:00:00.409) 0:00:14.131 ********** 2025-05-31 16:11:02.832258 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:02.832548 | orchestrator | 2025-05-31 16:11:02.833226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:02.833748 | orchestrator | Saturday 31 May 2025 16:11:02 +0000 (0:00:00.197) 0:00:14.328 ********** 2025-05-31 16:11:03.024560 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:03.024913 | orchestrator | 2025-05-31 16:11:03.025771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:03.026220 | orchestrator | Saturday 31 May 2025 16:11:03 +0000 (0:00:00.192) 0:00:14.520 ********** 2025-05-31 16:11:03.206504 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:03.206962 | orchestrator | 2025-05-31 16:11:03.207928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:03.208202 | orchestrator | Saturday 31 May 2025 16:11:03 +0000 (0:00:00.181) 0:00:14.702 ********** 2025-05-31 16:11:03.398415 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:03.398595 | orchestrator | 2025-05-31 16:11:03.398860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:03.398954 | orchestrator | Saturday 31 May 2025 16:11:03 +0000 (0:00:00.191) 0:00:14.894 ********** 2025-05-31 16:11:03.614906 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:03.615759 | orchestrator | 2025-05-31 16:11:03.616190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:03.618721 | orchestrator | Saturday 31 May 2025 16:11:03 +0000 (0:00:00.215) 0:00:15.110 ********** 2025-05-31 16:11:04.034197 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:04.034876 | orchestrator | 2025-05-31 16:11:04.035717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:04.039095 | orchestrator | Saturday 31 May 2025 16:11:04 +0000 (0:00:00.418) 0:00:15.528 ********** 2025-05-31 16:11:04.220700 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:04.221832 | orchestrator | 2025-05-31 16:11:04.223309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:04.223973 | orchestrator | Saturday 31 May 2025 16:11:04 +0000 (0:00:00.184) 0:00:15.713 ********** 2025-05-31 16:11:04.399193 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:04.399254 | orchestrator | 2025-05-31 16:11:04.399619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:04.401359 | orchestrator | Saturday 31 May 2025 16:11:04 +0000 (0:00:00.179) 0:00:15.893 ********** 2025-05-31 16:11:04.778419 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b) 2025-05-31 16:11:04.778784 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b) 2025-05-31 16:11:04.778820 | orchestrator | 2025-05-31 16:11:04.780784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:04.780812 | orchestrator | Saturday 31 May 2025 16:11:04 +0000 (0:00:00.380) 0:00:16.273 ********** 2025-05-31 16:11:05.128189 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81) 2025-05-31 16:11:05.128471 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81) 2025-05-31 16:11:05.130577 | orchestrator | 2025-05-31 16:11:05.131095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:05.131607 | orchestrator | Saturday 31 May 2025 16:11:05 +0000 (0:00:00.350) 0:00:16.623 ********** 2025-05-31 16:11:05.509584 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5) 2025-05-31 16:11:05.511471 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5) 2025-05-31 16:11:05.511503 | orchestrator | 2025-05-31 16:11:05.511926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:05.512930 | orchestrator | Saturday 31 May 2025 16:11:05 +0000 (0:00:00.381) 0:00:17.005 ********** 2025-05-31 16:11:05.895291 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa) 2025-05-31 16:11:05.896849 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa) 2025-05-31 16:11:05.897071 | orchestrator | 2025-05-31 16:11:05.898174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:05.899680 | orchestrator | Saturday 31 May 2025 16:11:05 +0000 (0:00:00.386) 0:00:17.391 ********** 2025-05-31 16:11:06.186347 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 16:11:06.186441 | orchestrator | 2025-05-31 16:11:06.187665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:06.187688 | orchestrator | Saturday 31 May 2025 16:11:06 +0000 (0:00:00.290) 0:00:17.682 ********** 2025-05-31 16:11:06.527291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-31 16:11:06.527459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-31 16:11:06.528322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-31 16:11:06.529265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-31 16:11:06.529862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-31 16:11:06.531352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-31 16:11:06.531929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-31 16:11:06.532502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-31 16:11:06.532700 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-31 16:11:06.533249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-31 16:11:06.533727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-31 16:11:06.534237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-31 16:11:06.534640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-31 16:11:06.535082 | orchestrator | 2025-05-31 16:11:06.535557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:06.535841 | orchestrator | Saturday 31 May 2025 16:11:06 +0000 (0:00:00.341) 0:00:18.023 ********** 2025-05-31 16:11:07.061592 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:07.061770 | orchestrator | 2025-05-31 16:11:07.062169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:07.062537 | orchestrator | Saturday 31 May 2025 16:11:07 +0000 (0:00:00.531) 0:00:18.555 ********** 2025-05-31 16:11:07.274642 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:07.275056 | orchestrator | 2025-05-31 16:11:07.276405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:07.277343 | orchestrator | Saturday 31 May 2025 16:11:07 +0000 (0:00:00.213) 0:00:18.768 ********** 2025-05-31 16:11:07.482720 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:07.483184 | orchestrator | 2025-05-31 16:11:07.483665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:07.484184 | orchestrator | Saturday 31 May 2025 16:11:07 +0000 (0:00:00.210) 0:00:18.978 ********** 2025-05-31 16:11:07.706453 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:07.706575 | orchestrator | 2025-05-31 16:11:07.706690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:07.706830 | orchestrator | Saturday 31 May 2025 16:11:07 +0000 (0:00:00.222) 0:00:19.202 ********** 2025-05-31 16:11:07.905502 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:07.905692 | orchestrator | 2025-05-31 16:11:07.906625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:07.906914 | orchestrator | Saturday 31 May 2025 16:11:07 +0000 (0:00:00.199) 0:00:19.401 ********** 2025-05-31 16:11:08.108463 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:08.109348 | orchestrator | 2025-05-31 16:11:08.109762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:08.112903 | orchestrator | Saturday 31 May 2025 16:11:08 +0000 (0:00:00.201) 0:00:19.602 ********** 2025-05-31 16:11:08.310407 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:08.310565 | orchestrator | 2025-05-31 16:11:08.311400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:08.312408 | orchestrator | Saturday 31 May 2025 16:11:08 +0000 (0:00:00.201) 0:00:19.804 ********** 2025-05-31 16:11:08.503038 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:08.503155 | orchestrator | 2025-05-31 16:11:08.503337 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-31 16:11:08.503797 | orchestrator | Saturday 31 May 2025 16:11:08 +0000 (0:00:00.192) 0:00:19.996 ********** 2025-05-31 16:11:09.378661 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-31 16:11:09.378805 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-31 16:11:09.378884 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-31 16:11:09.380106 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-31 16:11:09.381308 | orchestrator | 2025-05-31 16:11:09.382075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:09.382798 | orchestrator | Saturday 31 May 2025 16:11:09 +0000 (0:00:00.872) 0:00:20.869 ********** 2025-05-31 16:11:09.574526 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:09.579648 | orchestrator | 2025-05-31 16:11:09.580574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:09.581176 | orchestrator | Saturday 31 May 2025 16:11:09 +0000 (0:00:00.198) 0:00:21.067 ********** 2025-05-31 16:11:10.176042 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:10.178363 | orchestrator | 2025-05-31 16:11:10.179354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:10.180911 | orchestrator | Saturday 31 May 2025 16:11:10 +0000 (0:00:00.602) 0:00:21.670 ********** 2025-05-31 16:11:10.392112 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:10.392308 | orchestrator | 2025-05-31 16:11:10.392864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:10.394402 | orchestrator | Saturday 31 May 2025 16:11:10 +0000 (0:00:00.215) 0:00:21.885 ********** 2025-05-31 16:11:10.602975 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:10.604077 | orchestrator | 2025-05-31 16:11:10.604911 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-31 16:11:10.605598 | orchestrator | Saturday 31 May 2025 16:11:10 +0000 (0:00:00.210) 0:00:22.095 ********** 2025-05-31 16:11:10.780006 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-31 16:11:10.780223 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-31 16:11:10.781249 | orchestrator | 2025-05-31 16:11:10.781947 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-31 16:11:10.782611 | orchestrator | Saturday 31 May 2025 16:11:10 +0000 (0:00:00.179) 0:00:22.275 ********** 2025-05-31 16:11:10.914621 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:10.914793 | orchestrator | 2025-05-31 16:11:10.915577 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-31 16:11:10.916060 | orchestrator | Saturday 31 May 2025 16:11:10 +0000 (0:00:00.132) 0:00:22.407 ********** 2025-05-31 16:11:11.116581 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:11.116776 | orchestrator | 2025-05-31 16:11:11.117167 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-31 16:11:11.117534 | orchestrator | Saturday 31 May 2025 16:11:11 +0000 (0:00:00.200) 0:00:22.608 ********** 2025-05-31 16:11:11.239992 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:11.240069 | orchestrator | 2025-05-31 
16:11:11.241003 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-31 16:11:11.241692 | orchestrator | Saturday 31 May 2025 16:11:11 +0000 (0:00:00.126) 0:00:22.734 ********** 2025-05-31 16:11:11.378218 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:11:11.379640 | orchestrator | 2025-05-31 16:11:11.381632 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-31 16:11:11.381667 | orchestrator | Saturday 31 May 2025 16:11:11 +0000 (0:00:00.139) 0:00:22.873 ********** 2025-05-31 16:11:11.541676 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad7aff40-0fc1-546d-9ec3-a4c69926416d'}}) 2025-05-31 16:11:11.542684 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02409adc-b936-5a4c-b212-7809fa63c72a'}}) 2025-05-31 16:11:11.543716 | orchestrator | 2025-05-31 16:11:11.545370 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-31 16:11:11.545962 | orchestrator | Saturday 31 May 2025 16:11:11 +0000 (0:00:00.163) 0:00:23.037 ********** 2025-05-31 16:11:11.692952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad7aff40-0fc1-546d-9ec3-a4c69926416d'}})  2025-05-31 16:11:11.693903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02409adc-b936-5a4c-b212-7809fa63c72a'}})  2025-05-31 16:11:11.694682 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:11.697837 | orchestrator | 2025-05-31 16:11:11.697865 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-31 16:11:11.697880 | orchestrator | Saturday 31 May 2025 16:11:11 +0000 (0:00:00.151) 0:00:23.188 ********** 2025-05-31 16:11:11.854079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad7aff40-0fc1-546d-9ec3-a4c69926416d'}})  2025-05-31 16:11:11.854410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02409adc-b936-5a4c-b212-7809fa63c72a'}})  2025-05-31 16:11:11.855688 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:11.856656 | orchestrator | 2025-05-31 16:11:11.857554 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-31 16:11:11.857968 | orchestrator | Saturday 31 May 2025 16:11:11 +0000 (0:00:00.160) 0:00:23.349 ********** 2025-05-31 16:11:12.168175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad7aff40-0fc1-546d-9ec3-a4c69926416d'}})  2025-05-31 16:11:12.168851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02409adc-b936-5a4c-b212-7809fa63c72a'}})  2025-05-31 16:11:12.170103 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:12.171011 | orchestrator | 2025-05-31 16:11:12.171547 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-31 16:11:12.172102 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 (0:00:00.312) 0:00:23.661 ********** 2025-05-31 16:11:12.305313 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:11:12.305636 | orchestrator | 2025-05-31 16:11:12.306877 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-31 16:11:12.309169 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 
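For reference, one lvm_volumes entry of testbed-node-4 translates into a volume group and logical volume with the names shown below. ceph-volume (driven by ceph-ansible) normally creates these later during OSD deployment; the tasks are only an illustration of the naming convention, with /dev/sdb and the UUID taken from the output above.

  # Illustration only: shows the LVM objects implied by one data/data_vg pair.
  - name: Create the VG/LV pair behind one OSD entry (illustration only)
    hosts: testbed-node-4
    become: true
    tasks:
      - name: Volume group named ceph-<osd_lvm_uuid> on the raw disk
        community.general.lvg:
          vg: ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d
          pvs: /dev/sdb

      - name: Logical volume named osd-block-<osd_lvm_uuid> using the whole VG
        community.general.lvol:
          vg: ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d
          lv: osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d
          size: 100%FREE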
(0:00:00.138) 0:00:23.800 ********** 2025-05-31 16:11:12.442639 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:11:12.443307 | orchestrator | 2025-05-31 16:11:12.444484 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-31 16:11:12.446346 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 (0:00:00.137) 0:00:23.937 ********** 2025-05-31 16:11:12.572535 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:12.573289 | orchestrator | 2025-05-31 16:11:12.573967 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-31 16:11:12.576775 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 (0:00:00.130) 0:00:24.068 ********** 2025-05-31 16:11:12.707674 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:12.707732 | orchestrator | 2025-05-31 16:11:12.709976 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-31 16:11:12.710205 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 (0:00:00.132) 0:00:24.200 ********** 2025-05-31 16:11:12.836242 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:12.836875 | orchestrator | 2025-05-31 16:11:12.837847 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-31 16:11:12.840386 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 (0:00:00.131) 0:00:24.331 ********** 2025-05-31 16:11:12.977894 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:11:12.978436 | orchestrator |  "ceph_osd_devices": { 2025-05-31 16:11:12.980331 | orchestrator |  "sdb": { 2025-05-31 16:11:12.982735 | orchestrator |  "osd_lvm_uuid": "ad7aff40-0fc1-546d-9ec3-a4c69926416d" 2025-05-31 16:11:12.982807 | orchestrator |  }, 2025-05-31 16:11:12.982823 | orchestrator |  "sdc": { 2025-05-31 16:11:12.983509 | orchestrator |  "osd_lvm_uuid": "02409adc-b936-5a4c-b212-7809fa63c72a" 2025-05-31 16:11:12.984265 | orchestrator |  } 2025-05-31 16:11:12.985058 | orchestrator |  } 2025-05-31 16:11:12.985705 | orchestrator | } 2025-05-31 16:11:12.986113 | orchestrator | 2025-05-31 16:11:12.987034 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-31 16:11:12.988938 | orchestrator | Saturday 31 May 2025 16:11:12 +0000 (0:00:00.141) 0:00:24.473 ********** 2025-05-31 16:11:13.105633 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:13.106352 | orchestrator | 2025-05-31 16:11:13.107508 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-31 16:11:13.109930 | orchestrator | Saturday 31 May 2025 16:11:13 +0000 (0:00:00.127) 0:00:24.600 ********** 2025-05-31 16:11:13.239462 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:13.239789 | orchestrator | 2025-05-31 16:11:13.240831 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-31 16:11:13.241729 | orchestrator | Saturday 31 May 2025 16:11:13 +0000 (0:00:00.133) 0:00:24.734 ********** 2025-05-31 16:11:13.374256 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:11:13.375159 | orchestrator | 2025-05-31 16:11:13.376011 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-31 16:11:13.376487 | orchestrator | Saturday 31 May 2025 16:11:13 +0000 (0:00:00.135) 0:00:24.870 ********** 2025-05-31 16:11:13.801025 | orchestrator | changed: [testbed-node-4] => { 2025-05-31 16:11:13.801470 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-31 16:11:13.802072 | orchestrator |  "ceph_osd_devices": { 2025-05-31 16:11:13.803224 | orchestrator |  "sdb": { 2025-05-31 16:11:13.803975 | orchestrator |  "osd_lvm_uuid": "ad7aff40-0fc1-546d-9ec3-a4c69926416d" 2025-05-31 16:11:13.804720 | orchestrator |  }, 2025-05-31 16:11:13.805530 | orchestrator |  "sdc": { 2025-05-31 16:11:13.806608 | orchestrator |  "osd_lvm_uuid": "02409adc-b936-5a4c-b212-7809fa63c72a" 2025-05-31 16:11:13.807496 | orchestrator |  } 2025-05-31 16:11:13.808506 | orchestrator |  }, 2025-05-31 16:11:13.808529 | orchestrator |  "lvm_volumes": [ 2025-05-31 16:11:13.808991 | orchestrator |  { 2025-05-31 16:11:13.809159 | orchestrator |  "data": "osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d", 2025-05-31 16:11:13.809448 | orchestrator |  "data_vg": "ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d" 2025-05-31 16:11:13.809928 | orchestrator |  }, 2025-05-31 16:11:13.810236 | orchestrator |  { 2025-05-31 16:11:13.811339 | orchestrator |  "data": "osd-block-02409adc-b936-5a4c-b212-7809fa63c72a", 2025-05-31 16:11:13.812238 | orchestrator |  "data_vg": "ceph-02409adc-b936-5a4c-b212-7809fa63c72a" 2025-05-31 16:11:13.812572 | orchestrator |  } 2025-05-31 16:11:13.812977 | orchestrator |  ] 2025-05-31 16:11:13.813239 | orchestrator |  } 2025-05-31 16:11:13.813776 | orchestrator | } 2025-05-31 16:11:13.814235 | orchestrator | 2025-05-31 16:11:13.814719 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-31 16:11:13.815047 | orchestrator | Saturday 31 May 2025 16:11:13 +0000 (0:00:00.421) 0:00:25.291 ********** 2025-05-31 16:11:15.086277 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-31 16:11:15.086805 | orchestrator | 2025-05-31 16:11:15.086835 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-31 16:11:15.088010 | orchestrator | 2025-05-31 16:11:15.089235 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 16:11:15.089339 | orchestrator | Saturday 31 May 2025 16:11:15 +0000 (0:00:01.289) 0:00:26.580 ********** 2025-05-31 16:11:15.330442 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-31 16:11:15.331411 | orchestrator | 2025-05-31 16:11:15.331444 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 16:11:15.332372 | orchestrator | Saturday 31 May 2025 16:11:15 +0000 (0:00:00.243) 0:00:26.824 ********** 2025-05-31 16:11:15.570719 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:11:15.570801 | orchestrator | 2025-05-31 16:11:15.571107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:15.571715 | orchestrator | Saturday 31 May 2025 16:11:15 +0000 (0:00:00.240) 0:00:27.065 ********** 2025-05-31 16:11:16.088317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-31 16:11:16.088416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-31 16:11:16.089943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-31 16:11:16.089987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-31 16:11:16.090753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-05-31 16:11:16.094380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-31 16:11:16.094519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-31 16:11:16.095217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-31 16:11:16.095747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-31 16:11:16.096571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-31 16:11:16.097052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-31 16:11:16.097622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-31 16:11:16.098097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-31 16:11:16.098891 | orchestrator | 2025-05-31 16:11:16.099162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:16.100405 | orchestrator | Saturday 31 May 2025 16:11:16 +0000 (0:00:00.517) 0:00:27.582 ********** 2025-05-31 16:11:16.287618 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:16.287733 | orchestrator | 2025-05-31 16:11:16.288347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:16.288931 | orchestrator | Saturday 31 May 2025 16:11:16 +0000 (0:00:00.198) 0:00:27.781 ********** 2025-05-31 16:11:16.475094 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:16.475606 | orchestrator | 2025-05-31 16:11:16.476321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:16.477116 | orchestrator | Saturday 31 May 2025 16:11:16 +0000 (0:00:00.189) 0:00:27.970 ********** 2025-05-31 16:11:16.669472 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:16.670351 | orchestrator | 2025-05-31 16:11:16.670731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:16.671818 | orchestrator | Saturday 31 May 2025 16:11:16 +0000 (0:00:00.194) 0:00:28.165 ********** 2025-05-31 16:11:16.856699 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:16.857806 | orchestrator | 2025-05-31 16:11:16.857976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:16.858761 | orchestrator | Saturday 31 May 2025 16:11:16 +0000 (0:00:00.187) 0:00:28.352 ********** 2025-05-31 16:11:17.057476 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:17.058785 | orchestrator | 2025-05-31 16:11:17.059879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:17.062365 | orchestrator | Saturday 31 May 2025 16:11:17 +0000 (0:00:00.200) 0:00:28.552 ********** 2025-05-31 16:11:17.261473 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:17.261570 | orchestrator | 2025-05-31 16:11:17.262950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:17.264408 | orchestrator | Saturday 31 May 2025 16:11:17 +0000 (0:00:00.201) 0:00:28.753 ********** 2025-05-31 16:11:17.459120 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:17.459402 
| orchestrator | 2025-05-31 16:11:17.460446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:17.461263 | orchestrator | Saturday 31 May 2025 16:11:17 +0000 (0:00:00.200) 0:00:28.954 ********** 2025-05-31 16:11:17.643642 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:17.643839 | orchestrator | 2025-05-31 16:11:17.644258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:17.645127 | orchestrator | Saturday 31 May 2025 16:11:17 +0000 (0:00:00.183) 0:00:29.137 ********** 2025-05-31 16:11:18.228583 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3) 2025-05-31 16:11:18.228953 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3) 2025-05-31 16:11:18.229233 | orchestrator | 2025-05-31 16:11:18.229981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:18.231618 | orchestrator | Saturday 31 May 2025 16:11:18 +0000 (0:00:00.584) 0:00:29.722 ********** 2025-05-31 16:11:18.843539 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6) 2025-05-31 16:11:18.844440 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6) 2025-05-31 16:11:18.845669 | orchestrator | 2025-05-31 16:11:18.846908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:18.847384 | orchestrator | Saturday 31 May 2025 16:11:18 +0000 (0:00:00.616) 0:00:30.338 ********** 2025-05-31 16:11:19.472775 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604) 2025-05-31 16:11:19.473051 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604) 2025-05-31 16:11:19.474375 | orchestrator | 2025-05-31 16:11:19.475108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:19.475885 | orchestrator | Saturday 31 May 2025 16:11:19 +0000 (0:00:00.627) 0:00:30.966 ********** 2025-05-31 16:11:19.877585 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe) 2025-05-31 16:11:19.877866 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe) 2025-05-31 16:11:19.878318 | orchestrator | 2025-05-31 16:11:19.881125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:11:19.881755 | orchestrator | Saturday 31 May 2025 16:11:19 +0000 (0:00:00.406) 0:00:31.372 ********** 2025-05-31 16:11:20.194633 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 16:11:20.194888 | orchestrator | 2025-05-31 16:11:20.196792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:20.198911 | orchestrator | Saturday 31 May 2025 16:11:20 +0000 (0:00:00.317) 0:00:31.689 ********** 2025-05-31 16:11:20.586522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-31 16:11:20.586909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-31 16:11:20.588542 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-31 16:11:20.589197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-31 16:11:20.590855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-31 16:11:20.591857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-31 16:11:20.592549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-31 16:11:20.593261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-31 16:11:20.594111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-31 16:11:20.594597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-31 16:11:20.595035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-31 16:11:20.595830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-31 16:11:20.596307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-31 16:11:20.597119 | orchestrator | 2025-05-31 16:11:20.597711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:20.598313 | orchestrator | Saturday 31 May 2025 16:11:20 +0000 (0:00:00.391) 0:00:32.080 ********** 2025-05-31 16:11:20.787783 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:20.788688 | orchestrator | 2025-05-31 16:11:20.789930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:20.791075 | orchestrator | Saturday 31 May 2025 16:11:20 +0000 (0:00:00.200) 0:00:32.281 ********** 2025-05-31 16:11:20.984717 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:20.985801 | orchestrator | 2025-05-31 16:11:20.986622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:20.987304 | orchestrator | Saturday 31 May 2025 16:11:20 +0000 (0:00:00.198) 0:00:32.479 ********** 2025-05-31 16:11:21.182604 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:21.183378 | orchestrator | 2025-05-31 16:11:21.184252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:21.184993 | orchestrator | Saturday 31 May 2025 16:11:21 +0000 (0:00:00.196) 0:00:32.676 ********** 2025-05-31 16:11:21.377588 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:21.378594 | orchestrator | 2025-05-31 16:11:21.379629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:21.380240 | orchestrator | Saturday 31 May 2025 16:11:21 +0000 (0:00:00.195) 0:00:32.871 ********** 2025-05-31 16:11:21.563022 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:21.563984 | orchestrator | 2025-05-31 16:11:21.566286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:21.567227 | orchestrator | Saturday 31 May 2025 16:11:21 +0000 (0:00:00.185) 0:00:33.057 ********** 2025-05-31 16:11:22.078323 | orchestrator | skipping: [testbed-node-5] 2025-05-31 
16:11:22.078909 | orchestrator | 2025-05-31 16:11:22.079555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:22.081632 | orchestrator | Saturday 31 May 2025 16:11:22 +0000 (0:00:00.514) 0:00:33.571 ********** 2025-05-31 16:11:22.276820 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:22.278755 | orchestrator | 2025-05-31 16:11:22.280279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:22.280701 | orchestrator | Saturday 31 May 2025 16:11:22 +0000 (0:00:00.199) 0:00:33.771 ********** 2025-05-31 16:11:22.474971 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:22.475569 | orchestrator | 2025-05-31 16:11:22.476567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:22.477437 | orchestrator | Saturday 31 May 2025 16:11:22 +0000 (0:00:00.198) 0:00:33.969 ********** 2025-05-31 16:11:23.087194 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-31 16:11:23.087901 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-31 16:11:23.089116 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-31 16:11:23.090906 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-31 16:11:23.091547 | orchestrator | 2025-05-31 16:11:23.092592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:23.094897 | orchestrator | Saturday 31 May 2025 16:11:23 +0000 (0:00:00.612) 0:00:34.581 ********** 2025-05-31 16:11:23.284013 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:23.284592 | orchestrator | 2025-05-31 16:11:23.285250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:23.285826 | orchestrator | Saturday 31 May 2025 16:11:23 +0000 (0:00:00.196) 0:00:34.778 ********** 2025-05-31 16:11:23.496000 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:23.496205 | orchestrator | 2025-05-31 16:11:23.497450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:23.499262 | orchestrator | Saturday 31 May 2025 16:11:23 +0000 (0:00:00.211) 0:00:34.990 ********** 2025-05-31 16:11:23.687104 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:23.688579 | orchestrator | 2025-05-31 16:11:23.689628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:11:23.690320 | orchestrator | Saturday 31 May 2025 16:11:23 +0000 (0:00:00.192) 0:00:35.182 ********** 2025-05-31 16:11:23.883769 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:23.884615 | orchestrator | 2025-05-31 16:11:23.885240 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-31 16:11:23.886168 | orchestrator | Saturday 31 May 2025 16:11:23 +0000 (0:00:00.195) 0:00:35.378 ********** 2025-05-31 16:11:24.083965 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-31 16:11:24.085427 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-31 16:11:24.087643 | orchestrator | 2025-05-31 16:11:24.092705 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-31 16:11:24.094110 | orchestrator | Saturday 31 May 2025 16:11:24 +0000 (0:00:00.200) 0:00:35.579 ********** 2025-05-31 16:11:24.212118 | 
orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:24.214238 | orchestrator | 2025-05-31 16:11:24.214967 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-31 16:11:24.215849 | orchestrator | Saturday 31 May 2025 16:11:24 +0000 (0:00:00.127) 0:00:35.706 ********** 2025-05-31 16:11:24.355496 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:24.355555 | orchestrator | 2025-05-31 16:11:24.355569 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-31 16:11:24.355582 | orchestrator | Saturday 31 May 2025 16:11:24 +0000 (0:00:00.139) 0:00:35.846 ********** 2025-05-31 16:11:24.667104 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:24.667918 | orchestrator | 2025-05-31 16:11:24.669090 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-31 16:11:24.669930 | orchestrator | Saturday 31 May 2025 16:11:24 +0000 (0:00:00.315) 0:00:36.161 ********** 2025-05-31 16:11:24.803961 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:11:24.805234 | orchestrator | 2025-05-31 16:11:24.806711 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-31 16:11:24.807902 | orchestrator | Saturday 31 May 2025 16:11:24 +0000 (0:00:00.136) 0:00:36.298 ********** 2025-05-31 16:11:24.975450 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a818804-e2a7-5d8b-beae-a4acf44277a5'}}) 2025-05-31 16:11:24.976105 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8b45f5b5-5599-560e-b955-f5f9e148b85f'}}) 2025-05-31 16:11:24.977090 | orchestrator | 2025-05-31 16:11:24.978869 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-31 16:11:24.979087 | orchestrator | Saturday 31 May 2025 16:11:24 +0000 (0:00:00.172) 0:00:36.471 ********** 2025-05-31 16:11:25.133391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a818804-e2a7-5d8b-beae-a4acf44277a5'}})  2025-05-31 16:11:25.134263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8b45f5b5-5599-560e-b955-f5f9e148b85f'}})  2025-05-31 16:11:25.137850 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:25.137925 | orchestrator | 2025-05-31 16:11:25.137939 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-31 16:11:25.137952 | orchestrator | Saturday 31 May 2025 16:11:25 +0000 (0:00:00.155) 0:00:36.626 ********** 2025-05-31 16:11:25.313926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a818804-e2a7-5d8b-beae-a4acf44277a5'}})  2025-05-31 16:11:25.314128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8b45f5b5-5599-560e-b955-f5f9e148b85f'}})  2025-05-31 16:11:25.314166 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:25.314178 | orchestrator | 2025-05-31 16:11:25.314284 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-31 16:11:25.314299 | orchestrator | Saturday 31 May 2025 16:11:25 +0000 (0:00:00.177) 0:00:36.804 ********** 2025-05-31 16:11:25.460225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a818804-e2a7-5d8b-beae-a4acf44277a5'}})  2025-05-31 16:11:25.461069 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8b45f5b5-5599-560e-b955-f5f9e148b85f'}})  2025-05-31 16:11:25.461694 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:25.463786 | orchestrator | 2025-05-31 16:11:25.463812 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-31 16:11:25.463826 | orchestrator | Saturday 31 May 2025 16:11:25 +0000 (0:00:00.149) 0:00:36.954 ********** 2025-05-31 16:11:25.605168 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:11:25.606005 | orchestrator | 2025-05-31 16:11:25.611062 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-31 16:11:25.611500 | orchestrator | Saturday 31 May 2025 16:11:25 +0000 (0:00:00.144) 0:00:37.098 ********** 2025-05-31 16:11:25.747344 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:11:25.748292 | orchestrator | 2025-05-31 16:11:25.749188 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-31 16:11:25.750119 | orchestrator | Saturday 31 May 2025 16:11:25 +0000 (0:00:00.143) 0:00:37.242 ********** 2025-05-31 16:11:25.887625 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:25.888032 | orchestrator | 2025-05-31 16:11:25.888721 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-31 16:11:25.889053 | orchestrator | Saturday 31 May 2025 16:11:25 +0000 (0:00:00.140) 0:00:37.382 ********** 2025-05-31 16:11:26.016873 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:26.017025 | orchestrator | 2025-05-31 16:11:26.017635 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-31 16:11:26.017739 | orchestrator | Saturday 31 May 2025 16:11:26 +0000 (0:00:00.128) 0:00:37.511 ********** 2025-05-31 16:11:26.145074 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:26.145399 | orchestrator | 2025-05-31 16:11:26.146875 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-31 16:11:26.147781 | orchestrator | Saturday 31 May 2025 16:11:26 +0000 (0:00:00.127) 0:00:37.639 ********** 2025-05-31 16:11:26.285606 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:11:26.285966 | orchestrator |  "ceph_osd_devices": { 2025-05-31 16:11:26.287648 | orchestrator |  "sdb": { 2025-05-31 16:11:26.290132 | orchestrator |  "osd_lvm_uuid": "6a818804-e2a7-5d8b-beae-a4acf44277a5" 2025-05-31 16:11:26.290182 | orchestrator |  }, 2025-05-31 16:11:26.290199 | orchestrator |  "sdc": { 2025-05-31 16:11:26.290210 | orchestrator |  "osd_lvm_uuid": "8b45f5b5-5599-560e-b955-f5f9e148b85f" 2025-05-31 16:11:26.291113 | orchestrator |  } 2025-05-31 16:11:26.291765 | orchestrator |  } 2025-05-31 16:11:26.292585 | orchestrator | } 2025-05-31 16:11:26.293175 | orchestrator | 2025-05-31 16:11:26.294003 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-31 16:11:26.294495 | orchestrator | Saturday 31 May 2025 16:11:26 +0000 (0:00:00.140) 0:00:37.780 ********** 2025-05-31 16:11:26.591815 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:26.591982 | orchestrator | 2025-05-31 16:11:26.592812 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-31 16:11:26.595255 | orchestrator | Saturday 31 May 2025 16:11:26 +0000 (0:00:00.304) 0:00:38.085 ********** 2025-05-31 
16:11:26.732535 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:26.732965 | orchestrator | 2025-05-31 16:11:26.734010 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-31 16:11:26.734794 | orchestrator | Saturday 31 May 2025 16:11:26 +0000 (0:00:00.142) 0:00:38.227 ********** 2025-05-31 16:11:26.865450 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:11:26.866555 | orchestrator | 2025-05-31 16:11:26.867896 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-31 16:11:26.869202 | orchestrator | Saturday 31 May 2025 16:11:26 +0000 (0:00:00.132) 0:00:38.359 ********** 2025-05-31 16:11:27.144338 | orchestrator | changed: [testbed-node-5] => { 2025-05-31 16:11:27.145865 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-31 16:11:27.145889 | orchestrator |  "ceph_osd_devices": { 2025-05-31 16:11:27.147248 | orchestrator |  "sdb": { 2025-05-31 16:11:27.149370 | orchestrator |  "osd_lvm_uuid": "6a818804-e2a7-5d8b-beae-a4acf44277a5" 2025-05-31 16:11:27.149395 | orchestrator |  }, 2025-05-31 16:11:27.150631 | orchestrator |  "sdc": { 2025-05-31 16:11:27.152163 | orchestrator |  "osd_lvm_uuid": "8b45f5b5-5599-560e-b955-f5f9e148b85f" 2025-05-31 16:11:27.152772 | orchestrator |  } 2025-05-31 16:11:27.154310 | orchestrator |  }, 2025-05-31 16:11:27.155632 | orchestrator |  "lvm_volumes": [ 2025-05-31 16:11:27.156866 | orchestrator |  { 2025-05-31 16:11:27.157420 | orchestrator |  "data": "osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5", 2025-05-31 16:11:27.157948 | orchestrator |  "data_vg": "ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5" 2025-05-31 16:11:27.158533 | orchestrator |  }, 2025-05-31 16:11:27.159103 | orchestrator |  { 2025-05-31 16:11:27.159731 | orchestrator |  "data": "osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f", 2025-05-31 16:11:27.160621 | orchestrator |  "data_vg": "ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f" 2025-05-31 16:11:27.161068 | orchestrator |  } 2025-05-31 16:11:27.161741 | orchestrator |  ] 2025-05-31 16:11:27.162500 | orchestrator |  } 2025-05-31 16:11:27.163236 | orchestrator | } 2025-05-31 16:11:27.163523 | orchestrator | 2025-05-31 16:11:27.164097 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-31 16:11:27.164119 | orchestrator | Saturday 31 May 2025 16:11:27 +0000 (0:00:00.279) 0:00:38.639 ********** 2025-05-31 16:11:28.214321 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-31 16:11:28.215120 | orchestrator | 2025-05-31 16:11:28.216245 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:11:28.217716 | orchestrator | 2025-05-31 16:11:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:11:28.217727 | orchestrator | 2025-05-31 16:11:28 | INFO  | Please wait and do not abort execution. 
2025-05-31 16:11:28.219069 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 16:11:28.219833 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 16:11:28.220958 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 16:11:28.221663 | orchestrator |
2025-05-31 16:11:28.223201 | orchestrator |
2025-05-31 16:11:28.224105 | orchestrator |
2025-05-31 16:11:28.224945 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 16:11:28.225914 | orchestrator | Saturday 31 May 2025 16:11:28 +0000 (0:00:01.067) 0:00:39.707 **********
2025-05-31 16:11:28.226584 | orchestrator | ===============================================================================
2025-05-31 16:11:28.227355 | orchestrator | Write configuration file ------------------------------------------------ 4.11s
2025-05-31 16:11:28.227967 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s
2025-05-31 16:11:28.228748 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-05-31 16:11:28.229181 | orchestrator | Print configuration data ------------------------------------------------ 0.97s
2025-05-31 16:11:28.229867 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2025-05-31 16:11:28.230283 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s
2025-05-31 16:11:28.230916 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2025-05-31 16:11:28.231540 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-05-31 16:11:28.231921 | orchestrator | Print WAL devices ------------------------------------------------------- 0.69s
2025-05-31 16:11:28.232644 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.68s
2025-05-31 16:11:28.232939 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-31 16:11:28.233580 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-31 16:11:28.234112 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-05-31 16:11:28.234638 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-05-31 16:11:28.235048 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.60s
2025-05-31 16:11:28.235608 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2025-05-31 16:11:28.236015 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-05-31 16:11:28.236842 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s
2025-05-31 16:11:28.237197 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.53s
2025-05-31 16:11:28.237522 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.52s
2025-05-31 16:11:40.301663 | orchestrator | 2025-05-31 16:11:40 | INFO  | Task cc8f405d-8fe5-4e36-ad43-6a112203957e is running in background. Output coming soon.
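For reference, the configuration data printed above for testbed-node-5 and persisted by the "Write configuration file" handler corresponds to the following YAML. This is a reconstruction from the logged values only; the actual file name and location on testbed-manager are not shown in the log and are assumed here to be a host-specific vars file.

# Reconstructed from the "Print configuration data" output above.
# File name and path are assumptions; only the values are taken from the log.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 6a818804-e2a7-5d8b-beae-a4acf44277a5
  sdc:
    osd_lvm_uuid: 8b45f5b5-5599-560e-b955-f5f9e148b85f
lvm_volumes:
  - data: osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5
    data_vg: ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5
  - data: osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f
    data_vg: ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f

Each OSD device thus maps to one volume group named ceph-<osd_lvm_uuid> and one logical volume named osd-block-<osd_lvm_uuid>, which is the naming scheme the next play relies on.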
2025-05-31 16:12:01.977088 | orchestrator | 2025-05-31 16:11:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-31 16:12:01.977162 | orchestrator | 2025-05-31 16:11:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-31 16:12:01.977261 | orchestrator | 2025-05-31 16:11:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-31 16:12:01.977285 | orchestrator | 2025-05-31 16:11:55 | INFO  | Handling group overwrites in 99-overwrite
2025-05-31 16:12:01.977304 | orchestrator | 2025-05-31 16:11:55 | INFO  | Removing group frr:children from 60-generic
2025-05-31 16:12:01.977323 | orchestrator | 2025-05-31 16:11:55 | INFO  | Removing group storage:children from 50-kolla
2025-05-31 16:12:01.977342 | orchestrator | 2025-05-31 16:11:55 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-31 16:12:01.977359 | orchestrator | 2025-05-31 16:11:55 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-31 16:12:01.977379 | orchestrator | 2025-05-31 16:11:55 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-31 16:12:01.977398 | orchestrator | 2025-05-31 16:11:55 | INFO  | Handling group overwrites in 20-roles
2025-05-31 16:12:01.977416 | orchestrator | 2025-05-31 16:11:55 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-31 16:12:01.977436 | orchestrator | 2025-05-31 16:11:55 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-31 16:12:01.977454 | orchestrator | 2025-05-31 16:12:01 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-31 16:12:03.316387 | orchestrator | 2025-05-31 16:12:03 | INFO  | Task 0c5cedfc-34a6-4aaa-9f55-d413df43e5c7 (ceph-create-lvm-devices) was prepared for execution.
2025-05-31 16:12:03.316455 | orchestrator | 2025-05-31 16:12:03 | INFO  | It takes a moment until task 0c5cedfc-34a6-4aaa-9f55-d413df43e5c7 (ceph-create-lvm-devices) has been started and output is visible here.
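The ceph-create-lvm-devices task announced above creates one LVM volume group per configured OSD device and one block logical volume inside it, using the VG/LV names from lvm_volumes. A minimal sketch of roughly equivalent Ansible tasks is shown below; it assumes the community.general.lvg and community.general.lvol modules and is not the actual playbook source. The device and name values are copied from the testbed-node-3 output that follows (the "Create block VGs" / "Create block LVs" tasks and the final LVM report).

# Hedged sketch only: illustrates what "Create block VGs" and "Create block LVs"
# do for one OSD device (/dev/sdb on testbed-node-3); module choice and the
# 100%FREE sizing are assumptions, names are taken from the log output below.
- name: Create block VG for /dev/sdb
  community.general.lvg:
    vg: ceph-e43a14fa-64bd-59a3-8350-23173f11027f   # one VG per OSD device
    pvs: /dev/sdb

- name: Create block LV inside that VG
  community.general.lvol:
    vg: ceph-e43a14fa-64bd-59a3-8350-23173f11027f
    lv: osd-block-e43a14fa-64bd-59a3-8350-23173f11027f
    size: 100%FREE                                   # whole device used for the OSD block LV

ceph-ansible later consumes these VG/LV pairs via lvm_volumes when preparing the OSDs, which is why the play only validates and creates LVM objects here and does not run ceph-volume itself.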
2025-05-31 16:12:05.882469 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-31 16:12:06.313462 | orchestrator | 2025-05-31 16:12:06.313652 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-31 16:12:06.314865 | orchestrator | 2025-05-31 16:12:06.316087 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 16:12:06.317039 | orchestrator | Saturday 31 May 2025 16:12:06 +0000 (0:00:00.374) 0:00:00.374 ********** 2025-05-31 16:12:06.516541 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-31 16:12:06.516774 | orchestrator | 2025-05-31 16:12:06.517279 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 16:12:06.517797 | orchestrator | Saturday 31 May 2025 16:12:06 +0000 (0:00:00.205) 0:00:00.580 ********** 2025-05-31 16:12:06.722371 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:06.724244 | orchestrator | 2025-05-31 16:12:06.724274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:06.724287 | orchestrator | Saturday 31 May 2025 16:12:06 +0000 (0:00:00.204) 0:00:00.785 ********** 2025-05-31 16:12:07.283876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-31 16:12:07.283961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-31 16:12:07.283976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-31 16:12:07.284044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-31 16:12:07.284997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-31 16:12:07.285979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-31 16:12:07.287018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-31 16:12:07.288281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-31 16:12:07.289266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-31 16:12:07.290052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-31 16:12:07.291193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-31 16:12:07.291573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-31 16:12:07.292382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-31 16:12:07.293489 | orchestrator | 2025-05-31 16:12:07.294056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:07.294526 | orchestrator | Saturday 31 May 2025 16:12:07 +0000 (0:00:00.558) 0:00:01.343 ********** 2025-05-31 16:12:07.463816 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:07.463879 | orchestrator | 2025-05-31 16:12:07.463890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:07.463900 | orchestrator | Saturday 31 May 2025 16:12:07 +0000 
(0:00:00.182) 0:00:01.526 ********** 2025-05-31 16:12:07.638152 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:07.638952 | orchestrator | 2025-05-31 16:12:07.640306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:07.641075 | orchestrator | Saturday 31 May 2025 16:12:07 +0000 (0:00:00.175) 0:00:01.701 ********** 2025-05-31 16:12:07.816802 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:07.816883 | orchestrator | 2025-05-31 16:12:07.816896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:07.816910 | orchestrator | Saturday 31 May 2025 16:12:07 +0000 (0:00:00.176) 0:00:01.877 ********** 2025-05-31 16:12:07.984302 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:07.984385 | orchestrator | 2025-05-31 16:12:07.984399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:07.984411 | orchestrator | Saturday 31 May 2025 16:12:07 +0000 (0:00:00.166) 0:00:02.043 ********** 2025-05-31 16:12:08.164064 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:08.165705 | orchestrator | 2025-05-31 16:12:08.166443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:08.168881 | orchestrator | Saturday 31 May 2025 16:12:08 +0000 (0:00:00.182) 0:00:02.226 ********** 2025-05-31 16:12:08.350305 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:08.350388 | orchestrator | 2025-05-31 16:12:08.350403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:08.350416 | orchestrator | Saturday 31 May 2025 16:12:08 +0000 (0:00:00.182) 0:00:02.409 ********** 2025-05-31 16:12:08.522281 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:08.522360 | orchestrator | 2025-05-31 16:12:08.523533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:08.524388 | orchestrator | Saturday 31 May 2025 16:12:08 +0000 (0:00:00.175) 0:00:02.584 ********** 2025-05-31 16:12:08.697681 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:08.698000 | orchestrator | 2025-05-31 16:12:08.698405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:08.699004 | orchestrator | Saturday 31 May 2025 16:12:08 +0000 (0:00:00.176) 0:00:02.760 ********** 2025-05-31 16:12:09.188290 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4) 2025-05-31 16:12:09.188392 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4) 2025-05-31 16:12:09.188803 | orchestrator | 2025-05-31 16:12:09.189668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:09.189691 | orchestrator | Saturday 31 May 2025 16:12:09 +0000 (0:00:00.489) 0:00:03.250 ********** 2025-05-31 16:12:09.908822 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae) 2025-05-31 16:12:09.910492 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae) 2025-05-31 16:12:09.910879 | orchestrator | 2025-05-31 16:12:09.912092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 
16:12:09.912121 | orchestrator | Saturday 31 May 2025 16:12:09 +0000 (0:00:00.719) 0:00:03.969 ********** 2025-05-31 16:12:10.291754 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72) 2025-05-31 16:12:10.293148 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72) 2025-05-31 16:12:10.294408 | orchestrator | 2025-05-31 16:12:10.296044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:10.296736 | orchestrator | Saturday 31 May 2025 16:12:10 +0000 (0:00:00.380) 0:00:04.349 ********** 2025-05-31 16:12:10.662246 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da) 2025-05-31 16:12:10.663992 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da) 2025-05-31 16:12:10.664075 | orchestrator | 2025-05-31 16:12:10.664853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:10.665700 | orchestrator | Saturday 31 May 2025 16:12:10 +0000 (0:00:00.375) 0:00:04.724 ********** 2025-05-31 16:12:10.938003 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 16:12:10.938438 | orchestrator | 2025-05-31 16:12:10.939135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:10.940972 | orchestrator | Saturday 31 May 2025 16:12:10 +0000 (0:00:00.276) 0:00:05.001 ********** 2025-05-31 16:12:11.388956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-31 16:12:11.390350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-31 16:12:11.390380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-31 16:12:11.390519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-31 16:12:11.391455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-31 16:12:11.391905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-31 16:12:11.392604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-31 16:12:11.393227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-31 16:12:11.393858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-31 16:12:11.394439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-31 16:12:11.395062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-31 16:12:11.395684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-31 16:12:11.396237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-31 16:12:11.396741 | orchestrator | 2025-05-31 16:12:11.397278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:11.397639 | orchestrator | Saturday 31 May 2025 16:12:11 +0000 
(0:00:00.449) 0:00:05.450 ********** 2025-05-31 16:12:11.565513 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:11.565805 | orchestrator | 2025-05-31 16:12:11.566307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:11.567052 | orchestrator | Saturday 31 May 2025 16:12:11 +0000 (0:00:00.177) 0:00:05.628 ********** 2025-05-31 16:12:11.742434 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:11.744513 | orchestrator | 2025-05-31 16:12:11.744972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:11.745584 | orchestrator | Saturday 31 May 2025 16:12:11 +0000 (0:00:00.176) 0:00:05.805 ********** 2025-05-31 16:12:11.918630 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:11.918979 | orchestrator | 2025-05-31 16:12:11.919662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:11.920325 | orchestrator | Saturday 31 May 2025 16:12:11 +0000 (0:00:00.176) 0:00:05.981 ********** 2025-05-31 16:12:12.090352 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:12.090691 | orchestrator | 2025-05-31 16:12:12.091144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:12.091701 | orchestrator | Saturday 31 May 2025 16:12:12 +0000 (0:00:00.171) 0:00:06.153 ********** 2025-05-31 16:12:12.467756 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:12.468766 | orchestrator | 2025-05-31 16:12:12.468835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:12.469419 | orchestrator | Saturday 31 May 2025 16:12:12 +0000 (0:00:00.376) 0:00:06.530 ********** 2025-05-31 16:12:12.640548 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:12.640765 | orchestrator | 2025-05-31 16:12:12.641260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:12.643009 | orchestrator | Saturday 31 May 2025 16:12:12 +0000 (0:00:00.174) 0:00:06.704 ********** 2025-05-31 16:12:12.804655 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:12.805220 | orchestrator | 2025-05-31 16:12:12.807571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:12.807597 | orchestrator | Saturday 31 May 2025 16:12:12 +0000 (0:00:00.163) 0:00:06.867 ********** 2025-05-31 16:12:12.969604 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:12.969804 | orchestrator | 2025-05-31 16:12:12.970633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:12.971539 | orchestrator | Saturday 31 May 2025 16:12:12 +0000 (0:00:00.165) 0:00:07.033 ********** 2025-05-31 16:12:13.537964 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-31 16:12:13.538413 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-31 16:12:13.539731 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-31 16:12:13.540509 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-31 16:12:13.541390 | orchestrator | 2025-05-31 16:12:13.542071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:13.542737 | orchestrator | Saturday 31 May 2025 16:12:13 +0000 (0:00:00.566) 0:00:07.600 ********** 2025-05-31 16:12:13.715102 | orchestrator | 
skipping: [testbed-node-3] 2025-05-31 16:12:13.715346 | orchestrator | 2025-05-31 16:12:13.715627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:13.716006 | orchestrator | Saturday 31 May 2025 16:12:13 +0000 (0:00:00.178) 0:00:07.778 ********** 2025-05-31 16:12:13.881692 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:13.881776 | orchestrator | 2025-05-31 16:12:13.881952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:13.882627 | orchestrator | Saturday 31 May 2025 16:12:13 +0000 (0:00:00.165) 0:00:07.944 ********** 2025-05-31 16:12:14.045464 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:14.045719 | orchestrator | 2025-05-31 16:12:14.046435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:14.047050 | orchestrator | Saturday 31 May 2025 16:12:14 +0000 (0:00:00.163) 0:00:08.108 ********** 2025-05-31 16:12:14.224069 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:14.224358 | orchestrator | 2025-05-31 16:12:14.225011 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-31 16:12:14.225446 | orchestrator | Saturday 31 May 2025 16:12:14 +0000 (0:00:00.177) 0:00:08.286 ********** 2025-05-31 16:12:14.344515 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:14.344775 | orchestrator | 2025-05-31 16:12:14.345296 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-31 16:12:14.345983 | orchestrator | Saturday 31 May 2025 16:12:14 +0000 (0:00:00.120) 0:00:08.406 ********** 2025-05-31 16:12:14.523688 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a14fa-64bd-59a3-8350-23173f11027f'}}) 2025-05-31 16:12:14.523877 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92adfeec-5c5c-5208-b88e-9a01a071247e'}}) 2025-05-31 16:12:14.524351 | orchestrator | 2025-05-31 16:12:14.525823 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-31 16:12:14.526402 | orchestrator | Saturday 31 May 2025 16:12:14 +0000 (0:00:00.179) 0:00:08.586 ********** 2025-05-31 16:12:16.661977 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'}) 2025-05-31 16:12:16.662455 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'}) 2025-05-31 16:12:16.663444 | orchestrator | 2025-05-31 16:12:16.664839 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-31 16:12:16.665302 | orchestrator | Saturday 31 May 2025 16:12:16 +0000 (0:00:02.137) 0:00:10.723 ********** 2025-05-31 16:12:16.808627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:16.808999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:16.809665 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:16.812296 | orchestrator | 2025-05-31 16:12:16.812348 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-31 16:12:16.812443 | orchestrator | Saturday 31 May 2025 16:12:16 +0000 (0:00:00.147) 0:00:10.871 ********** 2025-05-31 16:12:18.245954 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'}) 2025-05-31 16:12:18.247324 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'}) 2025-05-31 16:12:18.249551 | orchestrator | 2025-05-31 16:12:18.251765 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-31 16:12:18.252526 | orchestrator | Saturday 31 May 2025 16:12:18 +0000 (0:00:01.435) 0:00:12.306 ********** 2025-05-31 16:12:18.389611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:18.390363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:18.391312 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:18.393481 | orchestrator | 2025-05-31 16:12:18.393556 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-31 16:12:18.393574 | orchestrator | Saturday 31 May 2025 16:12:18 +0000 (0:00:00.145) 0:00:12.452 ********** 2025-05-31 16:12:18.516316 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:18.516491 | orchestrator | 2025-05-31 16:12:18.516512 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-31 16:12:18.517573 | orchestrator | Saturday 31 May 2025 16:12:18 +0000 (0:00:00.127) 0:00:12.579 ********** 2025-05-31 16:12:18.681267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:18.681762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:18.682208 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:18.684893 | orchestrator | 2025-05-31 16:12:18.684916 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-31 16:12:18.684923 | orchestrator | Saturday 31 May 2025 16:12:18 +0000 (0:00:00.163) 0:00:12.743 ********** 2025-05-31 16:12:18.814696 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:18.814882 | orchestrator | 2025-05-31 16:12:18.815575 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-31 16:12:18.817024 | orchestrator | Saturday 31 May 2025 16:12:18 +0000 (0:00:00.132) 0:00:12.875 ********** 2025-05-31 16:12:18.970179 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:18.970399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:18.971035 | orchestrator | skipping: 
[testbed-node-3] 2025-05-31 16:12:18.972488 | orchestrator | 2025-05-31 16:12:18.973490 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-31 16:12:18.975817 | orchestrator | Saturday 31 May 2025 16:12:18 +0000 (0:00:00.157) 0:00:13.032 ********** 2025-05-31 16:12:19.099148 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:19.100412 | orchestrator | 2025-05-31 16:12:19.102498 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-31 16:12:19.105776 | orchestrator | Saturday 31 May 2025 16:12:19 +0000 (0:00:00.129) 0:00:13.161 ********** 2025-05-31 16:12:19.382693 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:19.383084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:19.385148 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:19.385513 | orchestrator | 2025-05-31 16:12:19.386243 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-31 16:12:19.386805 | orchestrator | Saturday 31 May 2025 16:12:19 +0000 (0:00:00.280) 0:00:13.442 ********** 2025-05-31 16:12:19.516060 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:19.516751 | orchestrator | 2025-05-31 16:12:19.517672 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-31 16:12:19.518092 | orchestrator | Saturday 31 May 2025 16:12:19 +0000 (0:00:00.136) 0:00:13.578 ********** 2025-05-31 16:12:19.675656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:19.678152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:19.678258 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:19.678274 | orchestrator | 2025-05-31 16:12:19.678728 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-31 16:12:19.679575 | orchestrator | Saturday 31 May 2025 16:12:19 +0000 (0:00:00.158) 0:00:13.736 ********** 2025-05-31 16:12:19.835868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:19.836462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:19.837416 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:19.838272 | orchestrator | 2025-05-31 16:12:19.838952 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-31 16:12:19.840676 | orchestrator | Saturday 31 May 2025 16:12:19 +0000 (0:00:00.160) 0:00:13.897 ********** 2025-05-31 16:12:20.000243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:20.000994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:20.002323 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:20.003439 | orchestrator | 2025-05-31 16:12:20.003816 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-31 16:12:20.004718 | orchestrator | Saturday 31 May 2025 16:12:19 +0000 (0:00:00.165) 0:00:14.062 ********** 2025-05-31 16:12:20.146282 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:20.146940 | orchestrator | 2025-05-31 16:12:20.147355 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-31 16:12:20.148164 | orchestrator | Saturday 31 May 2025 16:12:20 +0000 (0:00:00.145) 0:00:14.208 ********** 2025-05-31 16:12:20.284329 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:20.284419 | orchestrator | 2025-05-31 16:12:20.285248 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-31 16:12:20.285747 | orchestrator | Saturday 31 May 2025 16:12:20 +0000 (0:00:00.137) 0:00:14.345 ********** 2025-05-31 16:12:20.416411 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:20.417121 | orchestrator | 2025-05-31 16:12:20.418345 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-31 16:12:20.419249 | orchestrator | Saturday 31 May 2025 16:12:20 +0000 (0:00:00.133) 0:00:14.479 ********** 2025-05-31 16:12:20.551236 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 16:12:20.552027 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-31 16:12:20.552674 | orchestrator | } 2025-05-31 16:12:20.553822 | orchestrator | 2025-05-31 16:12:20.554382 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-31 16:12:20.554806 | orchestrator | Saturday 31 May 2025 16:12:20 +0000 (0:00:00.134) 0:00:14.613 ********** 2025-05-31 16:12:20.681365 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 16:12:20.681818 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-31 16:12:20.682457 | orchestrator | } 2025-05-31 16:12:20.683545 | orchestrator | 2025-05-31 16:12:20.685647 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-31 16:12:20.685993 | orchestrator | Saturday 31 May 2025 16:12:20 +0000 (0:00:00.130) 0:00:14.743 ********** 2025-05-31 16:12:20.820729 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 16:12:20.821285 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-31 16:12:20.821809 | orchestrator | } 2025-05-31 16:12:20.822544 | orchestrator | 2025-05-31 16:12:20.823341 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-31 16:12:20.824018 | orchestrator | Saturday 31 May 2025 16:12:20 +0000 (0:00:00.138) 0:00:14.881 ********** 2025-05-31 16:12:21.616862 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:21.617093 | orchestrator | 2025-05-31 16:12:21.617874 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-31 16:12:21.618404 | orchestrator | Saturday 31 May 2025 16:12:21 +0000 (0:00:00.797) 0:00:15.679 ********** 2025-05-31 16:12:22.116285 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:22.116449 | orchestrator | 2025-05-31 16:12:22.117279 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-05-31 16:12:22.117615 | orchestrator | Saturday 31 May 2025 16:12:22 +0000 (0:00:00.497) 0:00:16.176 ********** 2025-05-31 16:12:22.679103 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:22.679350 | orchestrator | 2025-05-31 16:12:22.679976 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-31 16:12:22.680432 | orchestrator | Saturday 31 May 2025 16:12:22 +0000 (0:00:00.564) 0:00:16.741 ********** 2025-05-31 16:12:22.820847 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:22.821506 | orchestrator | 2025-05-31 16:12:22.822401 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-31 16:12:22.823262 | orchestrator | Saturday 31 May 2025 16:12:22 +0000 (0:00:00.142) 0:00:16.883 ********** 2025-05-31 16:12:22.925308 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:22.925764 | orchestrator | 2025-05-31 16:12:22.926631 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-31 16:12:22.927081 | orchestrator | Saturday 31 May 2025 16:12:22 +0000 (0:00:00.103) 0:00:16.987 ********** 2025-05-31 16:12:23.023756 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:23.024262 | orchestrator | 2025-05-31 16:12:23.025588 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-31 16:12:23.026965 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.099) 0:00:17.086 ********** 2025-05-31 16:12:23.160467 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 16:12:23.160846 | orchestrator |  "vgs_report": { 2025-05-31 16:12:23.161259 | orchestrator |  "vg": [] 2025-05-31 16:12:23.162505 | orchestrator |  } 2025-05-31 16:12:23.164651 | orchestrator | } 2025-05-31 16:12:23.164688 | orchestrator | 2025-05-31 16:12:23.164694 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-31 16:12:23.164700 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.136) 0:00:17.223 ********** 2025-05-31 16:12:23.293650 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:23.294393 | orchestrator | 2025-05-31 16:12:23.295394 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-31 16:12:23.297368 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.133) 0:00:17.356 ********** 2025-05-31 16:12:23.435285 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:23.435390 | orchestrator | 2025-05-31 16:12:23.435900 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-31 16:12:23.435926 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.135) 0:00:17.491 ********** 2025-05-31 16:12:23.569389 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:23.574453 | orchestrator | 2025-05-31 16:12:23.576087 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-31 16:12:23.576691 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.140) 0:00:17.631 ********** 2025-05-31 16:12:23.707325 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:23.708218 | orchestrator | 2025-05-31 16:12:23.708694 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-31 16:12:23.709338 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.138) 0:00:17.770 ********** 2025-05-31 
16:12:24.003588 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.004093 | orchestrator | 2025-05-31 16:12:24.005307 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-31 16:12:24.006886 | orchestrator | Saturday 31 May 2025 16:12:23 +0000 (0:00:00.294) 0:00:18.064 ********** 2025-05-31 16:12:24.146179 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.147154 | orchestrator | 2025-05-31 16:12:24.147249 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-31 16:12:24.147353 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.143) 0:00:18.208 ********** 2025-05-31 16:12:24.286429 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.286875 | orchestrator | 2025-05-31 16:12:24.287504 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-31 16:12:24.288079 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.140) 0:00:18.349 ********** 2025-05-31 16:12:24.421920 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.422248 | orchestrator | 2025-05-31 16:12:24.422704 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-31 16:12:24.423549 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.134) 0:00:18.484 ********** 2025-05-31 16:12:24.565079 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.565504 | orchestrator | 2025-05-31 16:12:24.566872 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-31 16:12:24.569021 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.142) 0:00:18.626 ********** 2025-05-31 16:12:24.718372 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.718552 | orchestrator | 2025-05-31 16:12:24.718674 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-31 16:12:24.719376 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.153) 0:00:18.779 ********** 2025-05-31 16:12:24.847798 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.848181 | orchestrator | 2025-05-31 16:12:24.848468 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-31 16:12:24.849135 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.129) 0:00:18.909 ********** 2025-05-31 16:12:24.981277 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:24.981573 | orchestrator | 2025-05-31 16:12:24.982102 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-31 16:12:24.983528 | orchestrator | Saturday 31 May 2025 16:12:24 +0000 (0:00:00.132) 0:00:19.042 ********** 2025-05-31 16:12:25.128746 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:25.128949 | orchestrator | 2025-05-31 16:12:25.129348 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-31 16:12:25.129864 | orchestrator | Saturday 31 May 2025 16:12:25 +0000 (0:00:00.148) 0:00:19.191 ********** 2025-05-31 16:12:25.256722 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:25.256930 | orchestrator | 2025-05-31 16:12:25.257843 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-31 16:12:25.258713 | orchestrator | Saturday 31 May 2025 16:12:25 +0000 (0:00:00.126) 0:00:19.318 
********** 2025-05-31 16:12:25.410512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:25.410885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:25.411455 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:25.411932 | orchestrator | 2025-05-31 16:12:25.412405 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-31 16:12:25.413561 | orchestrator | Saturday 31 May 2025 16:12:25 +0000 (0:00:00.154) 0:00:19.472 ********** 2025-05-31 16:12:25.551548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:25.551649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:25.552281 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:25.553060 | orchestrator | 2025-05-31 16:12:25.553616 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-31 16:12:25.555420 | orchestrator | Saturday 31 May 2025 16:12:25 +0000 (0:00:00.140) 0:00:19.613 ********** 2025-05-31 16:12:25.883109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:25.883893 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:25.887079 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:25.887100 | orchestrator | 2025-05-31 16:12:25.887264 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-31 16:12:25.888204 | orchestrator | Saturday 31 May 2025 16:12:25 +0000 (0:00:00.331) 0:00:19.944 ********** 2025-05-31 16:12:26.032762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:26.033446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:26.034079 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:26.034615 | orchestrator | 2025-05-31 16:12:26.035163 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-31 16:12:26.035713 | orchestrator | Saturday 31 May 2025 16:12:26 +0000 (0:00:00.150) 0:00:20.095 ********** 2025-05-31 16:12:26.207423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:26.207520 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:26.208031 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:26.208556 | orchestrator | 2025-05-31 16:12:26.208915 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-31 16:12:26.209512 | orchestrator | Saturday 31 May 2025 16:12:26 +0000 (0:00:00.174) 0:00:20.269 ********** 2025-05-31 16:12:26.356959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:26.357277 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:26.357865 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:26.358673 | orchestrator | 2025-05-31 16:12:26.359310 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-31 16:12:26.359957 | orchestrator | Saturday 31 May 2025 16:12:26 +0000 (0:00:00.149) 0:00:20.419 ********** 2025-05-31 16:12:26.519839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:26.520449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:26.521368 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:26.522340 | orchestrator | 2025-05-31 16:12:26.522800 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-31 16:12:26.523741 | orchestrator | Saturday 31 May 2025 16:12:26 +0000 (0:00:00.159) 0:00:20.578 ********** 2025-05-31 16:12:26.678566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:26.678660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:26.679221 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:26.679628 | orchestrator | 2025-05-31 16:12:26.679652 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-31 16:12:26.681001 | orchestrator | Saturday 31 May 2025 16:12:26 +0000 (0:00:00.160) 0:00:20.739 ********** 2025-05-31 16:12:27.249451 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:27.249836 | orchestrator | 2025-05-31 16:12:27.250321 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-31 16:12:27.250915 | orchestrator | Saturday 31 May 2025 16:12:27 +0000 (0:00:00.571) 0:00:21.310 ********** 2025-05-31 16:12:27.798429 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:27.798599 | orchestrator | 2025-05-31 16:12:27.799315 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-31 16:12:27.799814 | orchestrator | Saturday 31 May 2025 16:12:27 +0000 (0:00:00.549) 0:00:21.860 ********** 2025-05-31 16:12:27.935651 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:12:27.935803 | orchestrator | 2025-05-31 16:12:27.936488 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-31 16:12:27.937156 | orchestrator | Saturday 31 May 2025 16:12:27 +0000 (0:00:00.137) 0:00:21.997 ********** 2025-05-31 16:12:28.125238 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'vg_name': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'}) 2025-05-31 16:12:28.125793 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'vg_name': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'}) 2025-05-31 16:12:28.126314 | orchestrator | 2025-05-31 16:12:28.126965 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-31 16:12:28.127582 | orchestrator | Saturday 31 May 2025 16:12:28 +0000 (0:00:00.189) 0:00:22.187 ********** 2025-05-31 16:12:28.286479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:28.286696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:28.287138 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:28.287622 | orchestrator | 2025-05-31 16:12:28.288168 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-31 16:12:28.288890 | orchestrator | Saturday 31 May 2025 16:12:28 +0000 (0:00:00.161) 0:00:22.349 ********** 2025-05-31 16:12:28.596118 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:28.596343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:28.596796 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:28.597881 | orchestrator | 2025-05-31 16:12:28.598321 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-31 16:12:28.598676 | orchestrator | Saturday 31 May 2025 16:12:28 +0000 (0:00:00.309) 0:00:22.658 ********** 2025-05-31 16:12:28.768260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'})  2025-05-31 16:12:28.768813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'})  2025-05-31 16:12:28.769687 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:12:28.770420 | orchestrator | 2025-05-31 16:12:28.772018 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-31 16:12:28.772043 | orchestrator | Saturday 31 May 2025 16:12:28 +0000 (0:00:00.172) 0:00:22.831 ********** 2025-05-31 16:12:29.404810 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 16:12:29.405492 | orchestrator |  "lvm_report": { 2025-05-31 16:12:29.405589 | orchestrator |  "lv": [ 2025-05-31 16:12:29.406394 | orchestrator |  { 2025-05-31 16:12:29.407229 | orchestrator |  "lv_name": "osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e", 2025-05-31 16:12:29.407853 | orchestrator |  "vg_name": "ceph-92adfeec-5c5c-5208-b88e-9a01a071247e" 2025-05-31 16:12:29.408425 | orchestrator |  }, 2025-05-31 16:12:29.408856 | orchestrator |  { 2025-05-31 16:12:29.409972 | orchestrator |  "lv_name": "osd-block-e43a14fa-64bd-59a3-8350-23173f11027f", 2025-05-31 
16:12:29.410768 | orchestrator |  "vg_name": "ceph-e43a14fa-64bd-59a3-8350-23173f11027f" 2025-05-31 16:12:29.411487 | orchestrator |  } 2025-05-31 16:12:29.412521 | orchestrator |  ], 2025-05-31 16:12:29.412967 | orchestrator |  "pv": [ 2025-05-31 16:12:29.413479 | orchestrator |  { 2025-05-31 16:12:29.414281 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-31 16:12:29.414664 | orchestrator |  "vg_name": "ceph-e43a14fa-64bd-59a3-8350-23173f11027f" 2025-05-31 16:12:29.415341 | orchestrator |  }, 2025-05-31 16:12:29.415735 | orchestrator |  { 2025-05-31 16:12:29.416140 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-31 16:12:29.416697 | orchestrator |  "vg_name": "ceph-92adfeec-5c5c-5208-b88e-9a01a071247e" 2025-05-31 16:12:29.417075 | orchestrator |  } 2025-05-31 16:12:29.417688 | orchestrator |  ] 2025-05-31 16:12:29.418009 | orchestrator |  } 2025-05-31 16:12:29.418466 | orchestrator | } 2025-05-31 16:12:29.419067 | orchestrator | 2025-05-31 16:12:29.419438 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-31 16:12:29.419711 | orchestrator | 2025-05-31 16:12:29.420281 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 16:12:29.420511 | orchestrator | Saturday 31 May 2025 16:12:29 +0000 (0:00:00.635) 0:00:23.466 ********** 2025-05-31 16:12:29.956612 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-31 16:12:29.957162 | orchestrator | 2025-05-31 16:12:29.957844 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 16:12:29.958726 | orchestrator | Saturday 31 May 2025 16:12:29 +0000 (0:00:00.552) 0:00:24.019 ********** 2025-05-31 16:12:30.184614 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:30.184792 | orchestrator | 2025-05-31 16:12:30.185427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:30.186096 | orchestrator | Saturday 31 May 2025 16:12:30 +0000 (0:00:00.226) 0:00:24.246 ********** 2025-05-31 16:12:30.613095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-31 16:12:30.614108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-31 16:12:30.615321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-31 16:12:30.615968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-31 16:12:30.617301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-31 16:12:30.617997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-31 16:12:30.618691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-31 16:12:30.619268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-31 16:12:30.620377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-31 16:12:30.620618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-31 16:12:30.621128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-31 16:12:30.621854 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-31 16:12:30.622376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-31 16:12:30.623170 | orchestrator | 2025-05-31 16:12:30.623590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:30.623865 | orchestrator | Saturday 31 May 2025 16:12:30 +0000 (0:00:00.429) 0:00:24.675 ********** 2025-05-31 16:12:30.806577 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:30.806757 | orchestrator | 2025-05-31 16:12:30.807590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:30.807922 | orchestrator | Saturday 31 May 2025 16:12:30 +0000 (0:00:00.193) 0:00:24.869 ********** 2025-05-31 16:12:30.986379 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:30.986598 | orchestrator | 2025-05-31 16:12:30.987172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:30.988081 | orchestrator | Saturday 31 May 2025 16:12:30 +0000 (0:00:00.179) 0:00:25.048 ********** 2025-05-31 16:12:31.178310 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:31.178494 | orchestrator | 2025-05-31 16:12:31.179255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:31.179883 | orchestrator | Saturday 31 May 2025 16:12:31 +0000 (0:00:00.192) 0:00:25.241 ********** 2025-05-31 16:12:31.364993 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:31.365222 | orchestrator | 2025-05-31 16:12:31.365523 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:31.366937 | orchestrator | Saturday 31 May 2025 16:12:31 +0000 (0:00:00.186) 0:00:25.427 ********** 2025-05-31 16:12:31.558074 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:31.558168 | orchestrator | 2025-05-31 16:12:31.558953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:31.560239 | orchestrator | Saturday 31 May 2025 16:12:31 +0000 (0:00:00.192) 0:00:25.619 ********** 2025-05-31 16:12:31.743529 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:31.743703 | orchestrator | 2025-05-31 16:12:31.744231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:31.744804 | orchestrator | Saturday 31 May 2025 16:12:31 +0000 (0:00:00.186) 0:00:25.805 ********** 2025-05-31 16:12:31.933162 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:31.933655 | orchestrator | 2025-05-31 16:12:31.934354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:31.935080 | orchestrator | Saturday 31 May 2025 16:12:31 +0000 (0:00:00.189) 0:00:25.995 ********** 2025-05-31 16:12:32.455594 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:32.456363 | orchestrator | 2025-05-31 16:12:32.456973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:32.459052 | orchestrator | Saturday 31 May 2025 16:12:32 +0000 (0:00:00.521) 0:00:26.516 ********** 2025-05-31 16:12:32.842280 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b) 2025-05-31 16:12:32.844235 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b) 2025-05-31 16:12:32.845331 | orchestrator | 2025-05-31 16:12:32.846285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:32.846900 | orchestrator | Saturday 31 May 2025 16:12:32 +0000 (0:00:00.388) 0:00:26.904 ********** 2025-05-31 16:12:33.257059 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81) 2025-05-31 16:12:33.257702 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81) 2025-05-31 16:12:33.258879 | orchestrator | 2025-05-31 16:12:33.259664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:33.260269 | orchestrator | Saturday 31 May 2025 16:12:33 +0000 (0:00:00.411) 0:00:27.316 ********** 2025-05-31 16:12:33.659672 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5) 2025-05-31 16:12:33.660541 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5) 2025-05-31 16:12:33.662972 | orchestrator | 2025-05-31 16:12:33.663816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:33.664330 | orchestrator | Saturday 31 May 2025 16:12:33 +0000 (0:00:00.403) 0:00:27.720 ********** 2025-05-31 16:12:34.067728 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa) 2025-05-31 16:12:34.069249 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa) 2025-05-31 16:12:34.070378 | orchestrator | 2025-05-31 16:12:34.072721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:34.072754 | orchestrator | Saturday 31 May 2025 16:12:34 +0000 (0:00:00.409) 0:00:28.130 ********** 2025-05-31 16:12:34.414242 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 16:12:34.414422 | orchestrator | 2025-05-31 16:12:34.414934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:34.415632 | orchestrator | Saturday 31 May 2025 16:12:34 +0000 (0:00:00.346) 0:00:28.476 ********** 2025-05-31 16:12:34.868283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-31 16:12:34.869777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-31 16:12:34.870717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-31 16:12:34.871635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-31 16:12:34.871768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-31 16:12:34.872358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-31 16:12:34.872722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-31 16:12:34.873080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-31 16:12:34.873477 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-31 16:12:34.873963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-31 16:12:34.874299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-31 16:12:34.874581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-31 16:12:34.874967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-31 16:12:34.875312 | orchestrator | 2025-05-31 16:12:34.875655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:34.876409 | orchestrator | Saturday 31 May 2025 16:12:34 +0000 (0:00:00.451) 0:00:28.928 ********** 2025-05-31 16:12:35.071890 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:35.072113 | orchestrator | 2025-05-31 16:12:35.072631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:35.072932 | orchestrator | Saturday 31 May 2025 16:12:35 +0000 (0:00:00.206) 0:00:29.134 ********** 2025-05-31 16:12:35.256715 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:35.258777 | orchestrator | 2025-05-31 16:12:35.258812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:35.259114 | orchestrator | Saturday 31 May 2025 16:12:35 +0000 (0:00:00.185) 0:00:29.319 ********** 2025-05-31 16:12:35.786847 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:35.787239 | orchestrator | 2025-05-31 16:12:35.787695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:35.788784 | orchestrator | Saturday 31 May 2025 16:12:35 +0000 (0:00:00.529) 0:00:29.849 ********** 2025-05-31 16:12:35.987345 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:35.987445 | orchestrator | 2025-05-31 16:12:35.988044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:35.988992 | orchestrator | Saturday 31 May 2025 16:12:35 +0000 (0:00:00.199) 0:00:30.048 ********** 2025-05-31 16:12:36.216980 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:36.217437 | orchestrator | 2025-05-31 16:12:36.217823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:36.218561 | orchestrator | Saturday 31 May 2025 16:12:36 +0000 (0:00:00.226) 0:00:30.275 ********** 2025-05-31 16:12:36.404102 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:36.404302 | orchestrator | 2025-05-31 16:12:36.404577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:36.405068 | orchestrator | Saturday 31 May 2025 16:12:36 +0000 (0:00:00.190) 0:00:30.466 ********** 2025-05-31 16:12:36.593301 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:36.594515 | orchestrator | 2025-05-31 16:12:36.594743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:36.595914 | orchestrator | Saturday 31 May 2025 16:12:36 +0000 (0:00:00.189) 0:00:30.655 ********** 2025-05-31 16:12:36.784121 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:36.784242 | orchestrator | 2025-05-31 16:12:36.784679 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-31 16:12:36.785127 | orchestrator | Saturday 31 May 2025 16:12:36 +0000 (0:00:00.191) 0:00:30.846 ********** 2025-05-31 16:12:37.412049 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-31 16:12:37.412859 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-31 16:12:37.413754 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-31 16:12:37.414856 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-31 16:12:37.415810 | orchestrator | 2025-05-31 16:12:37.416182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:37.416730 | orchestrator | Saturday 31 May 2025 16:12:37 +0000 (0:00:00.625) 0:00:31.472 ********** 2025-05-31 16:12:37.602500 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:37.602628 | orchestrator | 2025-05-31 16:12:37.603075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:37.603865 | orchestrator | Saturday 31 May 2025 16:12:37 +0000 (0:00:00.189) 0:00:31.662 ********** 2025-05-31 16:12:37.791484 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:37.791860 | orchestrator | 2025-05-31 16:12:37.792772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:37.793655 | orchestrator | Saturday 31 May 2025 16:12:37 +0000 (0:00:00.191) 0:00:31.853 ********** 2025-05-31 16:12:37.981304 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:37.982297 | orchestrator | 2025-05-31 16:12:37.982451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:37.983080 | orchestrator | Saturday 31 May 2025 16:12:37 +0000 (0:00:00.190) 0:00:32.044 ********** 2025-05-31 16:12:38.182613 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:38.182812 | orchestrator | 2025-05-31 16:12:38.182959 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-31 16:12:38.183103 | orchestrator | Saturday 31 May 2025 16:12:38 +0000 (0:00:00.200) 0:00:32.244 ********** 2025-05-31 16:12:38.481506 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:38.481707 | orchestrator | 2025-05-31 16:12:38.482398 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-31 16:12:38.482928 | orchestrator | Saturday 31 May 2025 16:12:38 +0000 (0:00:00.299) 0:00:32.543 ********** 2025-05-31 16:12:38.682334 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad7aff40-0fc1-546d-9ec3-a4c69926416d'}}) 2025-05-31 16:12:38.683118 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '02409adc-b936-5a4c-b212-7809fa63c72a'}}) 2025-05-31 16:12:38.683166 | orchestrator | 2025-05-31 16:12:38.683391 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-31 16:12:38.685814 | orchestrator | Saturday 31 May 2025 16:12:38 +0000 (0:00:00.200) 0:00:32.744 ********** 2025-05-31 16:12:40.583334 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'}) 2025-05-31 16:12:40.584029 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 
'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'}) 2025-05-31 16:12:40.585570 | orchestrator | 2025-05-31 16:12:40.585596 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-31 16:12:40.587557 | orchestrator | Saturday 31 May 2025 16:12:40 +0000 (0:00:01.899) 0:00:34.644 ********** 2025-05-31 16:12:40.744694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:40.745326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:40.746088 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:40.748608 | orchestrator | 2025-05-31 16:12:40.748652 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-31 16:12:40.748668 | orchestrator | Saturday 31 May 2025 16:12:40 +0000 (0:00:00.162) 0:00:34.806 ********** 2025-05-31 16:12:42.099722 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'}) 2025-05-31 16:12:42.100934 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'}) 2025-05-31 16:12:42.101950 | orchestrator | 2025-05-31 16:12:42.102737 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-31 16:12:42.103371 | orchestrator | Saturday 31 May 2025 16:12:42 +0000 (0:00:01.354) 0:00:36.161 ********** 2025-05-31 16:12:42.276343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:42.277450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:42.280180 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:42.280239 | orchestrator | 2025-05-31 16:12:42.280253 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-31 16:12:42.280268 | orchestrator | Saturday 31 May 2025 16:12:42 +0000 (0:00:00.177) 0:00:36.338 ********** 2025-05-31 16:12:42.414332 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:42.415402 | orchestrator | 2025-05-31 16:12:42.416396 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-31 16:12:42.417176 | orchestrator | Saturday 31 May 2025 16:12:42 +0000 (0:00:00.137) 0:00:36.476 ********** 2025-05-31 16:12:42.573718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:42.573811 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:42.574392 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:42.575330 | orchestrator | 2025-05-31 16:12:42.576149 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-31 16:12:42.577168 | orchestrator | Saturday 
31 May 2025 16:12:42 +0000 (0:00:00.158) 0:00:36.634 ********** 2025-05-31 16:12:42.707701 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:42.707810 | orchestrator | 2025-05-31 16:12:42.708530 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-31 16:12:42.709477 | orchestrator | Saturday 31 May 2025 16:12:42 +0000 (0:00:00.133) 0:00:36.767 ********** 2025-05-31 16:12:42.986248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:42.986591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:42.986997 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:42.987728 | orchestrator | 2025-05-31 16:12:42.988729 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-31 16:12:42.989094 | orchestrator | Saturday 31 May 2025 16:12:42 +0000 (0:00:00.281) 0:00:37.049 ********** 2025-05-31 16:12:43.125278 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:43.126511 | orchestrator | 2025-05-31 16:12:43.128321 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-31 16:12:43.129080 | orchestrator | Saturday 31 May 2025 16:12:43 +0000 (0:00:00.137) 0:00:37.187 ********** 2025-05-31 16:12:43.289696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:43.290117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:43.291420 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:43.292143 | orchestrator | 2025-05-31 16:12:43.294255 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-31 16:12:43.294294 | orchestrator | Saturday 31 May 2025 16:12:43 +0000 (0:00:00.165) 0:00:37.352 ********** 2025-05-31 16:12:43.433483 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:43.433704 | orchestrator | 2025-05-31 16:12:43.435070 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-31 16:12:43.437317 | orchestrator | Saturday 31 May 2025 16:12:43 +0000 (0:00:00.143) 0:00:37.496 ********** 2025-05-31 16:12:43.597741 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:43.598452 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:43.598894 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:43.599769 | orchestrator | 2025-05-31 16:12:43.600808 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-31 16:12:43.601237 | orchestrator | Saturday 31 May 2025 16:12:43 +0000 (0:00:00.163) 0:00:37.659 ********** 2025-05-31 16:12:43.759192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 
'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:43.760012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:43.762494 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:43.762827 | orchestrator | 2025-05-31 16:12:43.763217 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-31 16:12:43.763647 | orchestrator | Saturday 31 May 2025 16:12:43 +0000 (0:00:00.160) 0:00:37.820 ********** 2025-05-31 16:12:43.914701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:43.915596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:43.916301 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:43.917061 | orchestrator | 2025-05-31 16:12:43.919798 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-31 16:12:43.920102 | orchestrator | Saturday 31 May 2025 16:12:43 +0000 (0:00:00.156) 0:00:37.976 ********** 2025-05-31 16:12:44.042721 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:44.042903 | orchestrator | 2025-05-31 16:12:44.043373 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-31 16:12:44.043988 | orchestrator | Saturday 31 May 2025 16:12:44 +0000 (0:00:00.128) 0:00:38.105 ********** 2025-05-31 16:12:44.179658 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:44.179907 | orchestrator | 2025-05-31 16:12:44.181411 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-31 16:12:44.181943 | orchestrator | Saturday 31 May 2025 16:12:44 +0000 (0:00:00.136) 0:00:38.241 ********** 2025-05-31 16:12:44.313628 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:44.313839 | orchestrator | 2025-05-31 16:12:44.314364 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-31 16:12:44.314820 | orchestrator | Saturday 31 May 2025 16:12:44 +0000 (0:00:00.134) 0:00:38.375 ********** 2025-05-31 16:12:44.452406 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:12:44.453780 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-31 16:12:44.456397 | orchestrator | } 2025-05-31 16:12:44.456477 | orchestrator | 2025-05-31 16:12:44.456590 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-31 16:12:44.457109 | orchestrator | Saturday 31 May 2025 16:12:44 +0000 (0:00:00.137) 0:00:38.513 ********** 2025-05-31 16:12:44.784705 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:12:44.784868 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-31 16:12:44.785440 | orchestrator | } 2025-05-31 16:12:44.785999 | orchestrator | 2025-05-31 16:12:44.786656 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-31 16:12:44.787247 | orchestrator | Saturday 31 May 2025 16:12:44 +0000 (0:00:00.331) 0:00:38.845 ********** 2025-05-31 16:12:44.929368 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:12:44.929508 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-31 
16:12:44.930092 | orchestrator | } 2025-05-31 16:12:44.930687 | orchestrator | 2025-05-31 16:12:44.931142 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-31 16:12:44.932011 | orchestrator | Saturday 31 May 2025 16:12:44 +0000 (0:00:00.145) 0:00:38.991 ********** 2025-05-31 16:12:45.436902 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:45.437260 | orchestrator | 2025-05-31 16:12:45.438084 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-31 16:12:45.438685 | orchestrator | Saturday 31 May 2025 16:12:45 +0000 (0:00:00.506) 0:00:39.498 ********** 2025-05-31 16:12:45.967279 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:45.967458 | orchestrator | 2025-05-31 16:12:45.968686 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-31 16:12:45.968899 | orchestrator | Saturday 31 May 2025 16:12:45 +0000 (0:00:00.529) 0:00:40.028 ********** 2025-05-31 16:12:46.466627 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:46.468443 | orchestrator | 2025-05-31 16:12:46.468480 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-31 16:12:46.468525 | orchestrator | Saturday 31 May 2025 16:12:46 +0000 (0:00:00.500) 0:00:40.529 ********** 2025-05-31 16:12:46.602858 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:46.605023 | orchestrator | 2025-05-31 16:12:46.605057 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-31 16:12:46.605072 | orchestrator | Saturday 31 May 2025 16:12:46 +0000 (0:00:00.134) 0:00:40.663 ********** 2025-05-31 16:12:46.730458 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:46.731171 | orchestrator | 2025-05-31 16:12:46.731835 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-31 16:12:46.732774 | orchestrator | Saturday 31 May 2025 16:12:46 +0000 (0:00:00.128) 0:00:40.792 ********** 2025-05-31 16:12:46.846744 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:46.846925 | orchestrator | 2025-05-31 16:12:46.847901 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-31 16:12:46.849881 | orchestrator | Saturday 31 May 2025 16:12:46 +0000 (0:00:00.116) 0:00:40.909 ********** 2025-05-31 16:12:46.984036 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:12:46.984422 | orchestrator |  "vgs_report": { 2025-05-31 16:12:46.985455 | orchestrator |  "vg": [] 2025-05-31 16:12:46.986372 | orchestrator |  } 2025-05-31 16:12:46.987490 | orchestrator | } 2025-05-31 16:12:46.988350 | orchestrator | 2025-05-31 16:12:46.989402 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-31 16:12:46.990285 | orchestrator | Saturday 31 May 2025 16:12:46 +0000 (0:00:00.137) 0:00:41.046 ********** 2025-05-31 16:12:47.116291 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:47.116800 | orchestrator | 2025-05-31 16:12:47.117880 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-31 16:12:47.117985 | orchestrator | Saturday 31 May 2025 16:12:47 +0000 (0:00:00.124) 0:00:41.170 ********** 2025-05-31 16:12:47.247962 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:47.248189 | orchestrator | 2025-05-31 16:12:47.249001 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-05-31 16:12:47.249309 | orchestrator | Saturday 31 May 2025 16:12:47 +0000 (0:00:00.140) 0:00:41.311 ********** 2025-05-31 16:12:47.540959 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:47.541071 | orchestrator | 2025-05-31 16:12:47.541142 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-31 16:12:47.541415 | orchestrator | Saturday 31 May 2025 16:12:47 +0000 (0:00:00.292) 0:00:41.603 ********** 2025-05-31 16:12:47.670341 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:47.670443 | orchestrator | 2025-05-31 16:12:47.670668 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-31 16:12:47.670891 | orchestrator | Saturday 31 May 2025 16:12:47 +0000 (0:00:00.129) 0:00:41.733 ********** 2025-05-31 16:12:47.798793 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:47.798883 | orchestrator | 2025-05-31 16:12:47.799103 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-31 16:12:47.799299 | orchestrator | Saturday 31 May 2025 16:12:47 +0000 (0:00:00.127) 0:00:41.861 ********** 2025-05-31 16:12:47.925189 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:47.928249 | orchestrator | 2025-05-31 16:12:47.928307 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-31 16:12:47.928331 | orchestrator | Saturday 31 May 2025 16:12:47 +0000 (0:00:00.124) 0:00:41.986 ********** 2025-05-31 16:12:48.061605 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.062247 | orchestrator | 2025-05-31 16:12:48.062990 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-31 16:12:48.063934 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.137) 0:00:42.124 ********** 2025-05-31 16:12:48.203611 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.203705 | orchestrator | 2025-05-31 16:12:48.203928 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-31 16:12:48.204553 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.141) 0:00:42.265 ********** 2025-05-31 16:12:48.339302 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.339393 | orchestrator | 2025-05-31 16:12:48.339946 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-31 16:12:48.340926 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.136) 0:00:42.402 ********** 2025-05-31 16:12:48.472098 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.472311 | orchestrator | 2025-05-31 16:12:48.472860 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-31 16:12:48.473557 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.133) 0:00:42.535 ********** 2025-05-31 16:12:48.603579 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.604004 | orchestrator | 2025-05-31 16:12:48.605153 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-31 16:12:48.605720 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.130) 0:00:42.666 ********** 2025-05-31 16:12:48.734455 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.734642 | orchestrator | 2025-05-31 16:12:48.735190 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-05-31 16:12:48.738749 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.130) 0:00:42.796 ********** 2025-05-31 16:12:48.860015 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.860106 | orchestrator | 2025-05-31 16:12:48.860273 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-31 16:12:48.860355 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.126) 0:00:42.923 ********** 2025-05-31 16:12:48.983739 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:48.983818 | orchestrator | 2025-05-31 16:12:48.984252 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-31 16:12:48.984423 | orchestrator | Saturday 31 May 2025 16:12:48 +0000 (0:00:00.123) 0:00:43.047 ********** 2025-05-31 16:12:49.315158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:49.318444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:49.318486 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:49.318500 | orchestrator | 2025-05-31 16:12:49.318513 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-31 16:12:49.318526 | orchestrator | Saturday 31 May 2025 16:12:49 +0000 (0:00:00.330) 0:00:43.377 ********** 2025-05-31 16:12:49.494267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:49.495078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:49.495760 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:49.496414 | orchestrator | 2025-05-31 16:12:49.497784 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-31 16:12:49.498818 | orchestrator | Saturday 31 May 2025 16:12:49 +0000 (0:00:00.179) 0:00:43.556 ********** 2025-05-31 16:12:49.657262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:49.657359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:49.657592 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:49.659968 | orchestrator | 2025-05-31 16:12:49.659994 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-31 16:12:49.660009 | orchestrator | Saturday 31 May 2025 16:12:49 +0000 (0:00:00.162) 0:00:43.719 ********** 2025-05-31 16:12:49.813691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:49.813820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 
16:12:49.814136 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:49.814685 | orchestrator | 2025-05-31 16:12:49.815282 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-31 16:12:49.815890 | orchestrator | Saturday 31 May 2025 16:12:49 +0000 (0:00:00.155) 0:00:43.875 ********** 2025-05-31 16:12:49.979407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:49.979476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:49.981920 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:49.981943 | orchestrator | 2025-05-31 16:12:49.981956 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-31 16:12:49.982182 | orchestrator | Saturday 31 May 2025 16:12:49 +0000 (0:00:00.164) 0:00:44.040 ********** 2025-05-31 16:12:50.137470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:50.137543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:50.138620 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:50.139077 | orchestrator | 2025-05-31 16:12:50.139784 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-31 16:12:50.140586 | orchestrator | Saturday 31 May 2025 16:12:50 +0000 (0:00:00.158) 0:00:44.198 ********** 2025-05-31 16:12:50.299770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:50.301487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:50.302382 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:50.303254 | orchestrator | 2025-05-31 16:12:50.304166 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-31 16:12:50.304727 | orchestrator | Saturday 31 May 2025 16:12:50 +0000 (0:00:00.162) 0:00:44.361 ********** 2025-05-31 16:12:50.455582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:50.456267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:50.456510 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:50.457157 | orchestrator | 2025-05-31 16:12:50.458257 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-31 16:12:50.459349 | orchestrator | Saturday 31 May 2025 16:12:50 +0000 (0:00:00.156) 0:00:44.517 ********** 2025-05-31 16:12:50.960480 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:50.961058 | orchestrator | 2025-05-31 16:12:50.962067 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-05-31 16:12:50.962942 | orchestrator | Saturday 31 May 2025 16:12:50 +0000 (0:00:00.502) 0:00:45.020 ********** 2025-05-31 16:12:51.466933 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:51.467087 | orchestrator | 2025-05-31 16:12:51.468104 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-31 16:12:51.468639 | orchestrator | Saturday 31 May 2025 16:12:51 +0000 (0:00:00.508) 0:00:45.529 ********** 2025-05-31 16:12:51.608247 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:12:51.608571 | orchestrator | 2025-05-31 16:12:51.609535 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-31 16:12:51.610086 | orchestrator | Saturday 31 May 2025 16:12:51 +0000 (0:00:00.142) 0:00:45.671 ********** 2025-05-31 16:12:51.944763 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'vg_name': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'}) 2025-05-31 16:12:51.945007 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'vg_name': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'}) 2025-05-31 16:12:51.946188 | orchestrator | 2025-05-31 16:12:51.947355 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-31 16:12:51.947709 | orchestrator | Saturday 31 May 2025 16:12:51 +0000 (0:00:00.334) 0:00:46.005 ********** 2025-05-31 16:12:52.112351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:52.113163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:52.114490 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:52.115822 | orchestrator | 2025-05-31 16:12:52.116995 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-31 16:12:52.117669 | orchestrator | Saturday 31 May 2025 16:12:52 +0000 (0:00:00.169) 0:00:46.174 ********** 2025-05-31 16:12:52.271298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:52.271905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:52.272096 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:52.273471 | orchestrator | 2025-05-31 16:12:52.273734 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-31 16:12:52.274765 | orchestrator | Saturday 31 May 2025 16:12:52 +0000 (0:00:00.159) 0:00:46.334 ********** 2025-05-31 16:12:52.456313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'})  2025-05-31 16:12:52.456931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'})  2025-05-31 16:12:52.457360 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:12:52.458163 | orchestrator | 2025-05-31 
16:12:52.458704 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-31 16:12:52.459103 | orchestrator | Saturday 31 May 2025 16:12:52 +0000 (0:00:00.184) 0:00:46.518 ********** 2025-05-31 16:12:53.261940 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:12:53.262284 | orchestrator |  "lvm_report": { 2025-05-31 16:12:53.265756 | orchestrator |  "lv": [ 2025-05-31 16:12:53.265888 | orchestrator |  { 2025-05-31 16:12:53.265965 | orchestrator |  "lv_name": "osd-block-02409adc-b936-5a4c-b212-7809fa63c72a", 2025-05-31 16:12:53.266878 | orchestrator |  "vg_name": "ceph-02409adc-b936-5a4c-b212-7809fa63c72a" 2025-05-31 16:12:53.267022 | orchestrator |  }, 2025-05-31 16:12:53.267353 | orchestrator |  { 2025-05-31 16:12:53.269640 | orchestrator |  "lv_name": "osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d", 2025-05-31 16:12:53.269964 | orchestrator |  "vg_name": "ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d" 2025-05-31 16:12:53.270680 | orchestrator |  } 2025-05-31 16:12:53.271417 | orchestrator |  ], 2025-05-31 16:12:53.271705 | orchestrator |  "pv": [ 2025-05-31 16:12:53.272916 | orchestrator |  { 2025-05-31 16:12:53.273906 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-31 16:12:53.274641 | orchestrator |  "vg_name": "ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d" 2025-05-31 16:12:53.275096 | orchestrator |  }, 2025-05-31 16:12:53.276743 | orchestrator |  { 2025-05-31 16:12:53.276770 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-31 16:12:53.277848 | orchestrator |  "vg_name": "ceph-02409adc-b936-5a4c-b212-7809fa63c72a" 2025-05-31 16:12:53.277872 | orchestrator |  } 2025-05-31 16:12:53.278202 | orchestrator |  ] 2025-05-31 16:12:53.279643 | orchestrator |  } 2025-05-31 16:12:53.280289 | orchestrator | } 2025-05-31 16:12:53.280846 | orchestrator | 2025-05-31 16:12:53.281517 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-31 16:12:53.282905 | orchestrator | 2025-05-31 16:12:53.282943 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 16:12:53.283188 | orchestrator | Saturday 31 May 2025 16:12:53 +0000 (0:00:00.804) 0:00:47.323 ********** 2025-05-31 16:12:53.511002 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-31 16:12:53.511699 | orchestrator | 2025-05-31 16:12:53.513400 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 16:12:53.513447 | orchestrator | Saturday 31 May 2025 16:12:53 +0000 (0:00:00.249) 0:00:47.573 ********** 2025-05-31 16:12:53.726451 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:12:53.727337 | orchestrator | 2025-05-31 16:12:53.729076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:53.729448 | orchestrator | Saturday 31 May 2025 16:12:53 +0000 (0:00:00.215) 0:00:47.789 ********** 2025-05-31 16:12:54.172742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-31 16:12:54.172941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-31 16:12:54.173676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-31 16:12:54.174266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-31 16:12:54.174461 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-31 16:12:54.174858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-31 16:12:54.179060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-31 16:12:54.179169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-31 16:12:54.179493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-31 16:12:54.179984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-31 16:12:54.180195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-31 16:12:54.180469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-31 16:12:54.181811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-31 16:12:54.182150 | orchestrator | 2025-05-31 16:12:54.182783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:54.186322 | orchestrator | Saturday 31 May 2025 16:12:54 +0000 (0:00:00.444) 0:00:48.233 ********** 2025-05-31 16:12:54.367737 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:54.368083 | orchestrator | 2025-05-31 16:12:54.369405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:54.369873 | orchestrator | Saturday 31 May 2025 16:12:54 +0000 (0:00:00.196) 0:00:48.430 ********** 2025-05-31 16:12:54.554487 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:54.554784 | orchestrator | 2025-05-31 16:12:54.555496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:54.556245 | orchestrator | Saturday 31 May 2025 16:12:54 +0000 (0:00:00.186) 0:00:48.616 ********** 2025-05-31 16:12:54.747880 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:54.748092 | orchestrator | 2025-05-31 16:12:54.749530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:54.749920 | orchestrator | Saturday 31 May 2025 16:12:54 +0000 (0:00:00.193) 0:00:48.810 ********** 2025-05-31 16:12:54.946412 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:54.948676 | orchestrator | 2025-05-31 16:12:54.952080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:54.952903 | orchestrator | Saturday 31 May 2025 16:12:54 +0000 (0:00:00.197) 0:00:49.007 ********** 2025-05-31 16:12:55.140730 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:55.141014 | orchestrator | 2025-05-31 16:12:55.141895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:55.142649 | orchestrator | Saturday 31 May 2025 16:12:55 +0000 (0:00:00.195) 0:00:49.203 ********** 2025-05-31 16:12:55.319293 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:55.319428 | orchestrator | 2025-05-31 16:12:55.320271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:55.320851 | orchestrator | Saturday 31 May 2025 16:12:55 +0000 (0:00:00.177) 0:00:49.381 ********** 2025-05-31 16:12:55.690514 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 16:12:55.690859 | orchestrator | 2025-05-31 16:12:55.692041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:55.695866 | orchestrator | Saturday 31 May 2025 16:12:55 +0000 (0:00:00.371) 0:00:49.752 ********** 2025-05-31 16:12:55.896465 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:55.897441 | orchestrator | 2025-05-31 16:12:55.897494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:55.897870 | orchestrator | Saturday 31 May 2025 16:12:55 +0000 (0:00:00.205) 0:00:49.958 ********** 2025-05-31 16:12:56.300084 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3) 2025-05-31 16:12:56.300300 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3) 2025-05-31 16:12:56.300431 | orchestrator | 2025-05-31 16:12:56.303432 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:56.303574 | orchestrator | Saturday 31 May 2025 16:12:56 +0000 (0:00:00.403) 0:00:50.362 ********** 2025-05-31 16:12:56.732038 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6) 2025-05-31 16:12:56.732477 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6) 2025-05-31 16:12:56.733456 | orchestrator | 2025-05-31 16:12:56.734360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:56.737593 | orchestrator | Saturday 31 May 2025 16:12:56 +0000 (0:00:00.432) 0:00:50.794 ********** 2025-05-31 16:12:57.144307 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604) 2025-05-31 16:12:57.144670 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604) 2025-05-31 16:12:57.145826 | orchestrator | 2025-05-31 16:12:57.146407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:57.147127 | orchestrator | Saturday 31 May 2025 16:12:57 +0000 (0:00:00.410) 0:00:51.205 ********** 2025-05-31 16:12:57.550348 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe) 2025-05-31 16:12:57.551375 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe) 2025-05-31 16:12:57.552120 | orchestrator | 2025-05-31 16:12:57.552632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 16:12:57.553260 | orchestrator | Saturday 31 May 2025 16:12:57 +0000 (0:00:00.406) 0:00:51.611 ********** 2025-05-31 16:12:57.882337 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 16:12:57.883646 | orchestrator | 2025-05-31 16:12:57.885736 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:57.886139 | orchestrator | Saturday 31 May 2025 16:12:57 +0000 (0:00:00.332) 0:00:51.944 ********** 2025-05-31 16:12:58.332065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-31 16:12:58.332754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-05-31 16:12:58.334423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-31 16:12:58.335323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-31 16:12:58.336685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-31 16:12:58.338099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-31 16:12:58.338797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-31 16:12:58.340118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-31 16:12:58.340780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-31 16:12:58.341368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-31 16:12:58.342348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-31 16:12:58.342992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-31 16:12:58.343941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-31 16:12:58.345267 | orchestrator | 2025-05-31 16:12:58.345681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:58.346185 | orchestrator | Saturday 31 May 2025 16:12:58 +0000 (0:00:00.450) 0:00:52.394 ********** 2025-05-31 16:12:58.528971 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:58.529863 | orchestrator | 2025-05-31 16:12:58.530921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:58.531785 | orchestrator | Saturday 31 May 2025 16:12:58 +0000 (0:00:00.196) 0:00:52.590 ********** 2025-05-31 16:12:59.063952 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:59.065179 | orchestrator | 2025-05-31 16:12:59.065883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:59.066719 | orchestrator | Saturday 31 May 2025 16:12:59 +0000 (0:00:00.533) 0:00:53.124 ********** 2025-05-31 16:12:59.271558 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:59.271975 | orchestrator | 2025-05-31 16:12:59.274610 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:59.274701 | orchestrator | Saturday 31 May 2025 16:12:59 +0000 (0:00:00.207) 0:00:53.331 ********** 2025-05-31 16:12:59.473099 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:59.473207 | orchestrator | 2025-05-31 16:12:59.473313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:59.473329 | orchestrator | Saturday 31 May 2025 16:12:59 +0000 (0:00:00.200) 0:00:53.531 ********** 2025-05-31 16:12:59.659562 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:12:59.660002 | orchestrator | 2025-05-31 16:12:59.661269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:59.661921 | orchestrator | Saturday 31 May 2025 16:12:59 +0000 (0:00:00.189) 0:00:53.721 ********** 2025-05-31 16:12:59.854315 | orchestrator | 
skipping: [testbed-node-5] 2025-05-31 16:12:59.855066 | orchestrator | 2025-05-31 16:12:59.856406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:12:59.858956 | orchestrator | Saturday 31 May 2025 16:12:59 +0000 (0:00:00.194) 0:00:53.915 ********** 2025-05-31 16:13:00.042953 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:00.045429 | orchestrator | 2025-05-31 16:13:00.045894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:13:00.046990 | orchestrator | Saturday 31 May 2025 16:13:00 +0000 (0:00:00.187) 0:00:54.103 ********** 2025-05-31 16:13:00.240791 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:00.241566 | orchestrator | 2025-05-31 16:13:00.242689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:13:00.243378 | orchestrator | Saturday 31 May 2025 16:13:00 +0000 (0:00:00.200) 0:00:54.303 ********** 2025-05-31 16:13:01.023034 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-31 16:13:01.024133 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-31 16:13:01.025457 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-31 16:13:01.026282 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-31 16:13:01.027006 | orchestrator | 2025-05-31 16:13:01.027930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:13:01.028706 | orchestrator | Saturday 31 May 2025 16:13:01 +0000 (0:00:00.780) 0:00:55.083 ********** 2025-05-31 16:13:01.227732 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:01.228179 | orchestrator | 2025-05-31 16:13:01.229266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:13:01.230201 | orchestrator | Saturday 31 May 2025 16:13:01 +0000 (0:00:00.205) 0:00:55.289 ********** 2025-05-31 16:13:01.428113 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:01.428414 | orchestrator | 2025-05-31 16:13:01.432058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:13:01.432265 | orchestrator | Saturday 31 May 2025 16:13:01 +0000 (0:00:00.199) 0:00:55.489 ********** 2025-05-31 16:13:02.005723 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:02.006363 | orchestrator | 2025-05-31 16:13:02.007559 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 16:13:02.011085 | orchestrator | Saturday 31 May 2025 16:13:01 +0000 (0:00:00.577) 0:00:56.066 ********** 2025-05-31 16:13:02.221925 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:02.222066 | orchestrator | 2025-05-31 16:13:02.222763 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-31 16:13:02.223327 | orchestrator | Saturday 31 May 2025 16:13:02 +0000 (0:00:00.216) 0:00:56.282 ********** 2025-05-31 16:13:02.354628 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:02.354823 | orchestrator | 2025-05-31 16:13:02.358973 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-31 16:13:02.359002 | orchestrator | Saturday 31 May 2025 16:13:02 +0000 (0:00:00.133) 0:00:56.416 ********** 2025-05-31 16:13:02.555332 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'6a818804-e2a7-5d8b-beae-a4acf44277a5'}}) 2025-05-31 16:13:02.555511 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8b45f5b5-5599-560e-b955-f5f9e148b85f'}}) 2025-05-31 16:13:02.555858 | orchestrator | 2025-05-31 16:13:02.557975 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-31 16:13:02.559306 | orchestrator | Saturday 31 May 2025 16:13:02 +0000 (0:00:00.200) 0:00:56.616 ********** 2025-05-31 16:13:04.404619 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'}) 2025-05-31 16:13:04.404725 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'}) 2025-05-31 16:13:04.404741 | orchestrator | 2025-05-31 16:13:04.404772 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-31 16:13:04.405010 | orchestrator | Saturday 31 May 2025 16:13:04 +0000 (0:00:01.848) 0:00:58.465 ********** 2025-05-31 16:13:04.569520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:04.569643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:04.569714 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:04.570111 | orchestrator | 2025-05-31 16:13:04.571493 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-31 16:13:04.572319 | orchestrator | Saturday 31 May 2025 16:13:04 +0000 (0:00:00.165) 0:00:58.630 ********** 2025-05-31 16:13:05.963389 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'}) 2025-05-31 16:13:05.964303 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'}) 2025-05-31 16:13:05.964972 | orchestrator | 2025-05-31 16:13:05.965413 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-31 16:13:05.966332 | orchestrator | Saturday 31 May 2025 16:13:05 +0000 (0:00:01.393) 0:01:00.023 ********** 2025-05-31 16:13:06.121457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:06.122413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:06.123568 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:06.124166 | orchestrator | 2025-05-31 16:13:06.125478 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-31 16:13:06.126160 | orchestrator | Saturday 31 May 2025 16:13:06 +0000 (0:00:00.159) 0:01:00.182 ********** 2025-05-31 16:13:06.278679 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:06.278923 | orchestrator | 2025-05-31 16:13:06.279893 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-05-31 16:13:06.282467 | orchestrator | Saturday 31 May 2025 16:13:06 +0000 (0:00:00.156) 0:01:00.339 ********** 2025-05-31 16:13:06.568509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:06.569813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:06.570314 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:06.571380 | orchestrator | 2025-05-31 16:13:06.572013 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-31 16:13:06.574168 | orchestrator | Saturday 31 May 2025 16:13:06 +0000 (0:00:00.289) 0:01:00.629 ********** 2025-05-31 16:13:06.699639 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:06.700775 | orchestrator | 2025-05-31 16:13:06.703154 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-31 16:13:06.703182 | orchestrator | Saturday 31 May 2025 16:13:06 +0000 (0:00:00.132) 0:01:00.761 ********** 2025-05-31 16:13:06.854651 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:06.854897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:06.855338 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:06.856057 | orchestrator | 2025-05-31 16:13:06.856502 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-31 16:13:06.857156 | orchestrator | Saturday 31 May 2025 16:13:06 +0000 (0:00:00.154) 0:01:00.916 ********** 2025-05-31 16:13:06.981122 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:06.983055 | orchestrator | 2025-05-31 16:13:06.983561 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-31 16:13:06.984118 | orchestrator | Saturday 31 May 2025 16:13:06 +0000 (0:00:00.127) 0:01:01.043 ********** 2025-05-31 16:13:07.142623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:07.143113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:07.148774 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:07.149179 | orchestrator | 2025-05-31 16:13:07.151096 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-31 16:13:07.151566 | orchestrator | Saturday 31 May 2025 16:13:07 +0000 (0:00:00.161) 0:01:01.205 ********** 2025-05-31 16:13:07.281461 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:07.281659 | orchestrator | 2025-05-31 16:13:07.282140 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-31 16:13:07.282775 | orchestrator | Saturday 31 May 2025 16:13:07 +0000 (0:00:00.136) 0:01:01.341 ********** 2025-05-31 16:13:07.441088 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:07.441394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:07.442209 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:07.443265 | orchestrator | 2025-05-31 16:13:07.443834 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-31 16:13:07.444380 | orchestrator | Saturday 31 May 2025 16:13:07 +0000 (0:00:00.161) 0:01:01.503 ********** 2025-05-31 16:13:07.602318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:07.602837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:07.603649 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:07.606474 | orchestrator | 2025-05-31 16:13:07.606517 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-31 16:13:07.606530 | orchestrator | Saturday 31 May 2025 16:13:07 +0000 (0:00:00.161) 0:01:01.664 ********** 2025-05-31 16:13:07.773701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:07.773883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:07.774907 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:07.775764 | orchestrator | 2025-05-31 16:13:07.776390 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-31 16:13:07.778556 | orchestrator | Saturday 31 May 2025 16:13:07 +0000 (0:00:00.171) 0:01:01.835 ********** 2025-05-31 16:13:07.908081 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:07.908619 | orchestrator | 2025-05-31 16:13:07.909848 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-31 16:13:07.911705 | orchestrator | Saturday 31 May 2025 16:13:07 +0000 (0:00:00.134) 0:01:01.970 ********** 2025-05-31 16:13:08.033009 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:08.035094 | orchestrator | 2025-05-31 16:13:08.035150 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-31 16:13:08.035165 | orchestrator | Saturday 31 May 2025 16:13:08 +0000 (0:00:00.125) 0:01:02.095 ********** 2025-05-31 16:13:08.347135 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:08.347737 | orchestrator | 2025-05-31 16:13:08.348490 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-31 16:13:08.350492 | orchestrator | Saturday 31 May 2025 16:13:08 +0000 (0:00:00.312) 0:01:02.408 ********** 2025-05-31 16:13:08.483532 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:13:08.483614 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-31 16:13:08.483928 | orchestrator | } 2025-05-31 16:13:08.485171 | orchestrator | 2025-05-31 16:13:08.486294 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-31 16:13:08.486902 | orchestrator | Saturday 31 May 2025 16:13:08 +0000 (0:00:00.136) 0:01:02.544 ********** 2025-05-31 16:13:08.615315 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:13:08.615573 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-31 16:13:08.616582 | orchestrator | } 2025-05-31 16:13:08.617580 | orchestrator | 2025-05-31 16:13:08.619686 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-31 16:13:08.619739 | orchestrator | Saturday 31 May 2025 16:13:08 +0000 (0:00:00.132) 0:01:02.676 ********** 2025-05-31 16:13:08.750421 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:13:08.750865 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-31 16:13:08.752282 | orchestrator | } 2025-05-31 16:13:08.753152 | orchestrator | 2025-05-31 16:13:08.754294 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-31 16:13:08.754756 | orchestrator | Saturday 31 May 2025 16:13:08 +0000 (0:00:00.135) 0:01:02.812 ********** 2025-05-31 16:13:09.255583 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:09.256016 | orchestrator | 2025-05-31 16:13:09.256046 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-31 16:13:09.256061 | orchestrator | Saturday 31 May 2025 16:13:09 +0000 (0:00:00.505) 0:01:03.317 ********** 2025-05-31 16:13:09.759077 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:09.759299 | orchestrator | 2025-05-31 16:13:09.759674 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-31 16:13:09.760217 | orchestrator | Saturday 31 May 2025 16:13:09 +0000 (0:00:00.503) 0:01:03.820 ********** 2025-05-31 16:13:10.303452 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:10.303816 | orchestrator | 2025-05-31 16:13:10.304213 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-31 16:13:10.304984 | orchestrator | Saturday 31 May 2025 16:13:10 +0000 (0:00:00.544) 0:01:04.364 ********** 2025-05-31 16:13:10.460315 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:10.460541 | orchestrator | 2025-05-31 16:13:10.460558 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-31 16:13:10.460654 | orchestrator | Saturday 31 May 2025 16:13:10 +0000 (0:00:00.156) 0:01:04.521 ********** 2025-05-31 16:13:10.568935 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:10.569154 | orchestrator | 2025-05-31 16:13:10.569993 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-31 16:13:10.570502 | orchestrator | Saturday 31 May 2025 16:13:10 +0000 (0:00:00.109) 0:01:04.631 ********** 2025-05-31 16:13:10.676937 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:10.677215 | orchestrator | 2025-05-31 16:13:10.679512 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-31 16:13:10.679603 | orchestrator | Saturday 31 May 2025 16:13:10 +0000 (0:00:00.108) 0:01:04.739 ********** 2025-05-31 16:13:10.815721 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:13:10.816404 | orchestrator |  "vgs_report": { 2025-05-31 16:13:10.816812 | orchestrator |  "vg": [] 2025-05-31 16:13:10.817881 | orchestrator |  } 2025-05-31 16:13:10.819938 | orchestrator 
| } 2025-05-31 16:13:10.820670 | orchestrator | 2025-05-31 16:13:10.821350 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-31 16:13:10.822501 | orchestrator | Saturday 31 May 2025 16:13:10 +0000 (0:00:00.138) 0:01:04.878 ********** 2025-05-31 16:13:11.105816 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.106663 | orchestrator | 2025-05-31 16:13:11.108111 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-31 16:13:11.109013 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.288) 0:01:05.166 ********** 2025-05-31 16:13:11.260087 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.261189 | orchestrator | 2025-05-31 16:13:11.261909 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-31 16:13:11.265044 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.154) 0:01:05.321 ********** 2025-05-31 16:13:11.394857 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.396952 | orchestrator | 2025-05-31 16:13:11.397186 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-31 16:13:11.398372 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.131) 0:01:05.453 ********** 2025-05-31 16:13:11.559568 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.559745 | orchestrator | 2025-05-31 16:13:11.560068 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-31 16:13:11.560325 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.168) 0:01:05.621 ********** 2025-05-31 16:13:11.684472 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.684665 | orchestrator | 2025-05-31 16:13:11.685323 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-31 16:13:11.686138 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.125) 0:01:05.746 ********** 2025-05-31 16:13:11.821466 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.821571 | orchestrator | 2025-05-31 16:13:11.821933 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-31 16:13:11.821956 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.135) 0:01:05.882 ********** 2025-05-31 16:13:11.959583 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:11.960672 | orchestrator | 2025-05-31 16:13:11.963477 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-31 16:13:11.963523 | orchestrator | Saturday 31 May 2025 16:13:11 +0000 (0:00:00.138) 0:01:06.021 ********** 2025-05-31 16:13:12.105435 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:12.105555 | orchestrator | 2025-05-31 16:13:12.105569 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-31 16:13:12.105582 | orchestrator | Saturday 31 May 2025 16:13:12 +0000 (0:00:00.142) 0:01:06.163 ********** 2025-05-31 16:13:12.235399 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:12.235723 | orchestrator | 2025-05-31 16:13:12.236073 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-31 16:13:12.236504 | orchestrator | Saturday 31 May 2025 16:13:12 +0000 (0:00:00.128) 0:01:06.292 ********** 2025-05-31 16:13:12.363770 | orchestrator | 
skipping: [testbed-node-5] 2025-05-31 16:13:12.364071 | orchestrator | 2025-05-31 16:13:12.365299 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-31 16:13:12.367682 | orchestrator | Saturday 31 May 2025 16:13:12 +0000 (0:00:00.133) 0:01:06.426 ********** 2025-05-31 16:13:12.502663 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:12.503290 | orchestrator | 2025-05-31 16:13:12.503971 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-31 16:13:12.504547 | orchestrator | Saturday 31 May 2025 16:13:12 +0000 (0:00:00.138) 0:01:06.565 ********** 2025-05-31 16:13:12.660384 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:12.660565 | orchestrator | 2025-05-31 16:13:12.661356 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-31 16:13:12.662360 | orchestrator | Saturday 31 May 2025 16:13:12 +0000 (0:00:00.157) 0:01:06.722 ********** 2025-05-31 16:13:12.973955 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:12.974119 | orchestrator | 2025-05-31 16:13:12.975677 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-31 16:13:12.976319 | orchestrator | Saturday 31 May 2025 16:13:12 +0000 (0:00:00.313) 0:01:07.035 ********** 2025-05-31 16:13:13.116357 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:13.116606 | orchestrator | 2025-05-31 16:13:13.118397 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-31 16:13:13.118909 | orchestrator | Saturday 31 May 2025 16:13:13 +0000 (0:00:00.141) 0:01:07.177 ********** 2025-05-31 16:13:13.277880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:13.279372 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:13.280783 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:13.282166 | orchestrator | 2025-05-31 16:13:13.282967 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-31 16:13:13.283401 | orchestrator | Saturday 31 May 2025 16:13:13 +0000 (0:00:00.160) 0:01:07.338 ********** 2025-05-31 16:13:13.434733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:13.434837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:13.435359 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:13.438548 | orchestrator | 2025-05-31 16:13:13.438610 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-31 16:13:13.438631 | orchestrator | Saturday 31 May 2025 16:13:13 +0000 (0:00:00.158) 0:01:07.497 ********** 2025-05-31 16:13:13.604832 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:13.605388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:13.605860 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:13.606665 | orchestrator | 2025-05-31 16:13:13.607693 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-31 16:13:13.607802 | orchestrator | Saturday 31 May 2025 16:13:13 +0000 (0:00:00.170) 0:01:07.667 ********** 2025-05-31 16:13:13.774674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:13.775341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:13.776305 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:13.776904 | orchestrator | 2025-05-31 16:13:13.777389 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-31 16:13:13.777973 | orchestrator | Saturday 31 May 2025 16:13:13 +0000 (0:00:00.169) 0:01:07.836 ********** 2025-05-31 16:13:13.962564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:13.962695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:13.962798 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:13.963711 | orchestrator | 2025-05-31 16:13:13.964368 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-31 16:13:13.965041 | orchestrator | Saturday 31 May 2025 16:13:13 +0000 (0:00:00.187) 0:01:08.024 ********** 2025-05-31 16:13:14.128095 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:14.128428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:14.129180 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:14.129217 | orchestrator | 2025-05-31 16:13:14.129323 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-31 16:13:14.129455 | orchestrator | Saturday 31 May 2025 16:13:14 +0000 (0:00:00.165) 0:01:08.190 ********** 2025-05-31 16:13:14.300012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:14.300455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:14.301393 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:14.302215 | orchestrator | 2025-05-31 16:13:14.303059 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-31 16:13:14.303618 | orchestrator | Saturday 31 May 2025 16:13:14 +0000 (0:00:00.170) 0:01:08.360 ********** 2025-05-31 16:13:14.467494 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:14.468388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:14.469069 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:14.471942 | orchestrator | 2025-05-31 16:13:14.472371 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-31 16:13:14.472405 | orchestrator | Saturday 31 May 2025 16:13:14 +0000 (0:00:00.169) 0:01:08.530 ********** 2025-05-31 16:13:15.011060 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:15.011325 | orchestrator | 2025-05-31 16:13:15.012074 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-31 16:13:15.012938 | orchestrator | Saturday 31 May 2025 16:13:15 +0000 (0:00:00.542) 0:01:09.072 ********** 2025-05-31 16:13:15.658196 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:15.658630 | orchestrator | 2025-05-31 16:13:15.660118 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-31 16:13:15.661194 | orchestrator | Saturday 31 May 2025 16:13:15 +0000 (0:00:00.646) 0:01:09.718 ********** 2025-05-31 16:13:15.830149 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:15.830459 | orchestrator | 2025-05-31 16:13:15.831184 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-31 16:13:15.832382 | orchestrator | Saturday 31 May 2025 16:13:15 +0000 (0:00:00.173) 0:01:09.892 ********** 2025-05-31 16:13:16.000849 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'vg_name': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'}) 2025-05-31 16:13:16.001026 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'vg_name': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'}) 2025-05-31 16:13:16.001887 | orchestrator | 2025-05-31 16:13:16.002443 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-31 16:13:16.002989 | orchestrator | Saturday 31 May 2025 16:13:15 +0000 (0:00:00.171) 0:01:10.063 ********** 2025-05-31 16:13:16.172091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:16.173112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:16.173190 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:16.174000 | orchestrator | 2025-05-31 16:13:16.175265 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-31 16:13:16.175605 | orchestrator | Saturday 31 May 2025 16:13:16 +0000 (0:00:00.170) 0:01:10.234 ********** 2025-05-31 16:13:16.334506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:16.334921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  
2025-05-31 16:13:16.338270 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:16.338358 | orchestrator | 2025-05-31 16:13:16.340208 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-31 16:13:16.341469 | orchestrator | Saturday 31 May 2025 16:13:16 +0000 (0:00:00.160) 0:01:10.395 ********** 2025-05-31 16:13:16.487174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'})  2025-05-31 16:13:16.487484 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'})  2025-05-31 16:13:16.487966 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:16.488342 | orchestrator | 2025-05-31 16:13:16.489193 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-31 16:13:16.489316 | orchestrator | Saturday 31 May 2025 16:13:16 +0000 (0:00:00.154) 0:01:10.550 ********** 2025-05-31 16:13:16.884288 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:13:16.884432 | orchestrator |  "lvm_report": { 2025-05-31 16:13:16.884834 | orchestrator |  "lv": [ 2025-05-31 16:13:16.885702 | orchestrator |  { 2025-05-31 16:13:16.886307 | orchestrator |  "lv_name": "osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5", 2025-05-31 16:13:16.887179 | orchestrator |  "vg_name": "ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5" 2025-05-31 16:13:16.888337 | orchestrator |  }, 2025-05-31 16:13:16.888941 | orchestrator |  { 2025-05-31 16:13:16.889522 | orchestrator |  "lv_name": "osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f", 2025-05-31 16:13:16.891276 | orchestrator |  "vg_name": "ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f" 2025-05-31 16:13:16.891597 | orchestrator |  } 2025-05-31 16:13:16.892077 | orchestrator |  ], 2025-05-31 16:13:16.892784 | orchestrator |  "pv": [ 2025-05-31 16:13:16.893091 | orchestrator |  { 2025-05-31 16:13:16.893787 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-31 16:13:16.894275 | orchestrator |  "vg_name": "ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5" 2025-05-31 16:13:16.894675 | orchestrator |  }, 2025-05-31 16:13:16.895437 | orchestrator |  { 2025-05-31 16:13:16.895919 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-31 16:13:16.896316 | orchestrator |  "vg_name": "ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f" 2025-05-31 16:13:16.897116 | orchestrator |  } 2025-05-31 16:13:16.897413 | orchestrator |  ] 2025-05-31 16:13:16.897961 | orchestrator |  } 2025-05-31 16:13:16.898270 | orchestrator | } 2025-05-31 16:13:16.898760 | orchestrator | 2025-05-31 16:13:16.899144 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:13:16.899625 | orchestrator | 2025-05-31 16:13:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:13:16.900093 | orchestrator | 2025-05-31 16:13:16 | INFO  | Please wait and do not abort execution. 
2025-05-31 16:13:16.900899 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-31 16:13:16.901350 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-31 16:13:16.901835 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-31 16:13:16.902380 | orchestrator | 2025-05-31 16:13:16.903032 | orchestrator | 2025-05-31 16:13:16.903414 | orchestrator | 2025-05-31 16:13:16.904037 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:13:16.904344 | orchestrator | Saturday 31 May 2025 16:13:16 +0000 (0:00:00.395) 0:01:10.945 ********** 2025-05-31 16:13:16.905270 | orchestrator | =============================================================================== 2025-05-31 16:13:16.905527 | orchestrator | Create block VGs -------------------------------------------------------- 5.89s 2025-05-31 16:13:16.905912 | orchestrator | Create block LVs -------------------------------------------------------- 4.18s 2025-05-31 16:13:16.906293 | orchestrator | Print LVM report data --------------------------------------------------- 1.84s 2025-05-31 16:13:16.906808 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s 2025-05-31 16:13:16.907350 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.70s 2025-05-31 16:13:16.907784 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.62s 2025-05-31 16:13:16.908277 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s 2025-05-31 16:13:16.908563 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2025-05-31 16:13:16.909018 | orchestrator | Add known links to the list of available block devices ------------------ 1.43s 2025-05-31 16:13:16.909464 | orchestrator | Add known partitions to the list of available block devices ------------- 1.35s 2025-05-31 16:13:16.909765 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.01s 2025-05-31 16:13:16.910083 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-05-31 16:13:16.910472 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-05-31 16:13:16.911126 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.70s 2025-05-31 16:13:16.911818 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.66s 2025-05-31 16:13:16.912142 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2025-05-31 16:13:16.912830 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.65s 2025-05-31 16:13:16.913143 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.63s 2025-05-31 16:13:16.913900 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-05-31 16:13:16.914379 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.61s 2025-05-31 16:13:18.764153 | orchestrator | 2025-05-31 16:13:18 | INFO  | Task 7dfa1ce8-8891-4072-b6d9-b247d33dccfb (facts) was prepared for execution. 
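The "Create block VGs" and "Create block LVs" steps timed in the recap above create one LVM volume group per Ceph OSD device and a single osd-block logical volume inside it, and the later "Get list of Ceph LVs/PVs" and "Print LVM report data" tasks read the result back as JSON. The following is a minimal standalone sketch of that pattern only, not the OSISM role itself: it assumes the community.general.lvg and community.general.lvol modules plus lvs/pvs with --reportformat json, the variable names are illustrative, and the device-to-VG mapping is inferred from the pv list in the report above.

---
- hosts: testbed-node-5
  become: true
  vars:
    # Mirrors the loop items printed in the log; the /dev/sdX mapping is
    # taken from the pv entries of the LVM report and is otherwise assumed.
    ceph_osd_devices:
      - device: /dev/sdb
        data_vg: ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5
        data: osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5
      - device: /dev/sdc
        data_vg: ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f
        data: osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f
  tasks:
    - name: Create block VGs (one VG per OSD device)
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ item.device }}"
        state: present
      loop: "{{ ceph_osd_devices }}"

    - name: Create block LVs (one osd-block LV filling each VG)
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE
        state: present
      loop: "{{ ceph_osd_devices }}"

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report: >-
          {{ (_lvs_cmd_output.stdout | from_json).report[0]
             | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report

Run against a node with blank /dev/sdb and /dev/sdc, this produces the same lv/pv structure that the "Print LVM report data" task prints above; the DB/WAL VG handling visible in the log is skipped here because no ceph_db_wal_devices are defined in this job.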
2025-05-31 16:13:18.764383 | orchestrator | 2025-05-31 16:13:18 | INFO  | It takes a moment until task 7dfa1ce8-8891-4072-b6d9-b247d33dccfb (facts) has been started and output is visible here. 2025-05-31 16:13:21.770003 | orchestrator | 2025-05-31 16:13:21.770485 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-31 16:13:21.772095 | orchestrator | 2025-05-31 16:13:21.772124 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-31 16:13:21.773901 | orchestrator | Saturday 31 May 2025 16:13:21 +0000 (0:00:00.189) 0:00:00.189 ********** 2025-05-31 16:13:22.753619 | orchestrator | ok: [testbed-manager] 2025-05-31 16:13:22.754138 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:13:22.754492 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:13:22.755988 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:13:22.756958 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:13:22.757581 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:13:22.758385 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:22.759495 | orchestrator | 2025-05-31 16:13:22.760029 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-31 16:13:22.760487 | orchestrator | Saturday 31 May 2025 16:13:22 +0000 (0:00:00.985) 0:00:01.174 ********** 2025-05-31 16:13:22.924998 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:13:23.001037 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:13:23.078150 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:13:23.153810 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:13:23.228455 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:13:23.929370 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:13:23.930453 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:23.931065 | orchestrator | 2025-05-31 16:13:23.932059 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 16:13:23.932955 | orchestrator | 2025-05-31 16:13:23.934000 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 16:13:23.934527 | orchestrator | Saturday 31 May 2025 16:13:23 +0000 (0:00:01.177) 0:00:02.351 ********** 2025-05-31 16:13:28.346919 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:13:28.347507 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:13:28.348368 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:13:28.349194 | orchestrator | ok: [testbed-manager] 2025-05-31 16:13:28.349788 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:13:28.352700 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:13:28.352734 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:13:28.352746 | orchestrator | 2025-05-31 16:13:28.354598 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-31 16:13:28.355043 | orchestrator | 2025-05-31 16:13:28.355458 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-31 16:13:28.355836 | orchestrator | Saturday 31 May 2025 16:13:28 +0000 (0:00:04.418) 0:00:06.770 ********** 2025-05-31 16:13:28.629332 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:13:28.703470 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:13:28.772812 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:13:28.846998 | orchestrator | skipping: [testbed-node-2] 2025-05-31 
16:13:28.918683 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:13:28.948544 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:13:28.951000 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:13:28.951706 | orchestrator | 2025-05-31 16:13:28.952470 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:13:28.952534 | orchestrator | 2025-05-31 16:13:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 16:13:28.952551 | orchestrator | 2025-05-31 16:13:28 | INFO  | Please wait and do not abort execution. 2025-05-31 16:13:28.952977 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.953030 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.953676 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.953704 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.953717 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.953816 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.954118 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:13:28.954366 | orchestrator | 2025-05-31 16:13:28.954570 | orchestrator | Saturday 31 May 2025 16:13:28 +0000 (0:00:00.601) 0:00:07.372 ********** 2025-05-31 16:13:28.954923 | orchestrator | =============================================================================== 2025-05-31 16:13:28.955351 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.42s 2025-05-31 16:13:28.955429 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2025-05-31 16:13:28.955961 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s 2025-05-31 16:13:28.956024 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2025-05-31 16:13:29.423401 | orchestrator | 2025-05-31 16:13:29.426556 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat May 31 16:13:29 UTC 2025 2025-05-31 16:13:29.426605 | orchestrator | 2025-05-31 16:13:30.799016 | orchestrator | 2025-05-31 16:13:30 | INFO  | Collection nutshell is prepared for execution 2025-05-31 16:13:30.799123 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [0] - dotfiles 2025-05-31 16:13:30.803111 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [0] - homer 2025-05-31 16:13:30.803141 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [0] - netdata 2025-05-31 16:13:30.803154 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [0] - openstackclient 2025-05-31 16:13:30.803165 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [0] - phpmyadmin 2025-05-31 16:13:30.803176 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [0] - common 2025-05-31 16:13:30.804569 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [1] -- loadbalancer 2025-05-31 16:13:30.804624 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [2] --- opensearch 2025-05-31 16:13:30.804690 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [2] --- mariadb-ng 2025-05-31 16:13:30.804705 | orchestrator | 2025-05-31 
16:13:30 | INFO  | D [3] ---- horizon 2025-05-31 16:13:30.804744 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [3] ---- keystone 2025-05-31 16:13:30.804757 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [4] ----- neutron 2025-05-31 16:13:30.804818 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [5] ------ wait-for-nova 2025-05-31 16:13:30.804833 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [5] ------ octavia 2025-05-31 16:13:30.805499 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [4] ----- barbican 2025-05-31 16:13:30.805521 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [4] ----- designate 2025-05-31 16:13:30.805534 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [4] ----- ironic 2025-05-31 16:13:30.805546 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [4] ----- placement 2025-05-31 16:13:30.805559 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [4] ----- magnum 2025-05-31 16:13:30.805795 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [1] -- openvswitch 2025-05-31 16:13:30.805847 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [2] --- ovn 2025-05-31 16:13:30.805908 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [1] -- memcached 2025-05-31 16:13:30.805923 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [1] -- redis 2025-05-31 16:13:30.805934 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [1] -- rabbitmq-ng 2025-05-31 16:13:30.806127 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [0] - kubernetes 2025-05-31 16:13:30.806148 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [1] -- kubeconfig 2025-05-31 16:13:30.806260 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [1] -- copy-kubeconfig 2025-05-31 16:13:30.806278 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [0] - ceph 2025-05-31 16:13:30.807658 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [1] -- ceph-pools 2025-05-31 16:13:30.807684 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [2] --- copy-ceph-keys 2025-05-31 16:13:30.807819 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [3] ---- cephclient 2025-05-31 16:13:30.807838 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-31 16:13:30.807879 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [4] ----- wait-for-keystone 2025-05-31 16:13:30.807890 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-31 16:13:30.807928 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [5] ------ glance 2025-05-31 16:13:30.807991 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [5] ------ cinder 2025-05-31 16:13:30.808006 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [5] ------ nova 2025-05-31 16:13:30.808017 | orchestrator | 2025-05-31 16:13:30 | INFO  | A [4] ----- prometheus 2025-05-31 16:13:30.808189 | orchestrator | 2025-05-31 16:13:30 | INFO  | D [5] ------ grafana 2025-05-31 16:13:30.917962 | orchestrator | 2025-05-31 16:13:30 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-31 16:13:30.918098 | orchestrator | 2025-05-31 16:13:30 | INFO  | Tasks are running in the background 2025-05-31 16:13:32.643710 | orchestrator | 2025-05-31 16:13:32 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-31 16:13:34.737568 | orchestrator | 2025-05-31 16:13:34 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:34.740090 | orchestrator | 2025-05-31 16:13:34 | INFO  | Task e30ad5a6-2b41-4fb3-85a7-dcd729cc52d6 is in state STARTED 2025-05-31 16:13:34.740413 | orchestrator | 2025-05-31 16:13:34 | INFO  | Task 
d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:34.740950 | orchestrator | 2025-05-31 16:13:34 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:34.744305 | orchestrator | 2025-05-31 16:13:34 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:34.744612 | orchestrator | 2025-05-31 16:13:34 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:34.744688 | orchestrator | 2025-05-31 16:13:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:37.791747 | orchestrator | 2025-05-31 16:13:37 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:37.792487 | orchestrator | 2025-05-31 16:13:37 | INFO  | Task e30ad5a6-2b41-4fb3-85a7-dcd729cc52d6 is in state STARTED 2025-05-31 16:13:37.792734 | orchestrator | 2025-05-31 16:13:37 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:37.793214 | orchestrator | 2025-05-31 16:13:37 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:37.793925 | orchestrator | 2025-05-31 16:13:37 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:37.797616 | orchestrator | 2025-05-31 16:13:37 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:37.797692 | orchestrator | 2025-05-31 16:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:40.840086 | orchestrator | 2025-05-31 16:13:40 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:40.842468 | orchestrator | 2025-05-31 16:13:40 | INFO  | Task e30ad5a6-2b41-4fb3-85a7-dcd729cc52d6 is in state STARTED 2025-05-31 16:13:40.842932 | orchestrator | 2025-05-31 16:13:40 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:40.843418 | orchestrator | 2025-05-31 16:13:40 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:40.843984 | orchestrator | 2025-05-31 16:13:40 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:40.844448 | orchestrator | 2025-05-31 16:13:40 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:40.844674 | orchestrator | 2025-05-31 16:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:43.895152 | orchestrator | 2025-05-31 16:13:43 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:43.901149 | orchestrator | 2025-05-31 16:13:43 | INFO  | Task e30ad5a6-2b41-4fb3-85a7-dcd729cc52d6 is in state STARTED 2025-05-31 16:13:43.901190 | orchestrator | 2025-05-31 16:13:43 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:43.901202 | orchestrator | 2025-05-31 16:13:43 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:43.901563 | orchestrator | 2025-05-31 16:13:43 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:43.902307 | orchestrator | 2025-05-31 16:13:43 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:43.904935 | orchestrator | 2025-05-31 16:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:46.970540 | orchestrator | 2025-05-31 16:13:46 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:46.970640 | orchestrator | 2025-05-31 
16:13:46 | INFO  | Task e30ad5a6-2b41-4fb3-85a7-dcd729cc52d6 is in state STARTED 2025-05-31 16:13:46.970967 | orchestrator | 2025-05-31 16:13:46 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:46.972153 | orchestrator | 2025-05-31 16:13:46 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:46.972180 | orchestrator | 2025-05-31 16:13:46 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:46.972645 | orchestrator | 2025-05-31 16:13:46 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:46.972665 | orchestrator | 2025-05-31 16:13:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:50.006297 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:50.007596 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:13:50.009008 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task e30ad5a6-2b41-4fb3-85a7-dcd729cc52d6 is in state SUCCESS 2025-05-31 16:13:50.012008 | orchestrator | 2025-05-31 16:13:50.012061 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-31 16:13:50.012074 | orchestrator | 2025-05-31 16:13:50.012085 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-31 16:13:50.012097 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.226) 0:00:00.226 ********** 2025-05-31 16:13:50.012108 | orchestrator | changed: [testbed-manager] 2025-05-31 16:13:50.012119 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:13:50.012130 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:13:50.012141 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:13:50.012151 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:13:50.012161 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:13:50.012172 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:13:50.012183 | orchestrator | 2025-05-31 16:13:50.012193 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-31 16:13:50.012204 | orchestrator | Saturday 31 May 2025 16:13:41 +0000 (0:00:03.764) 0:00:03.991 ********** 2025-05-31 16:13:50.012215 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-31 16:13:50.012226 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-31 16:13:50.012237 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-31 16:13:50.012269 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-31 16:13:50.012280 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-31 16:13:50.012291 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-31 16:13:50.012302 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-31 16:13:50.012335 | orchestrator | 2025-05-31 16:13:50.012347 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-31 16:13:50.012358 | orchestrator | Saturday 31 May 2025 16:13:43 +0000 (0:00:02.053) 0:00:06.044 ********** 2025-05-31 16:13:50.012373 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:42.344774', 'end': '2025-05-31 16:13:42.350718', 'delta': '0:00:00.005944', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 16:13:50.012394 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:42.206709', 'end': '2025-05-31 16:13:42.212476', 'delta': '0:00:00.005767', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 16:13:50.012441 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:42.456654', 'end': '2025-05-31 16:13:42.463571', 'delta': '0:00:00.006917', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 16:13:50.012500 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:42.562674', 'end': '2025-05-31 16:13:42.568492', 'delta': '0:00:00.005818', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-05-31 16:13:50.012514 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:42.817222', 'end': '2025-05-31 16:13:42.824945', 'delta': '0:00:00.007723', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 16:13:50.012534 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:43.106290', 'end': '2025-05-31 16:13:43.111949', 'delta': '0:00:00.005659', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 16:13:50.012546 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 16:13:43.351512', 'end': '2025-05-31 16:13:43.358493', 'delta': '0:00:00.006981', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 16:13:50.012557 | orchestrator | 2025-05-31 16:13:50.012575 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-05-31 16:13:50.012588 | orchestrator | Saturday 31 May 2025 16:13:45 +0000 (0:00:01.957) 0:00:08.002 ********** 2025-05-31 16:13:50.012601 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-31 16:13:50.012614 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-31 16:13:50.012626 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-31 16:13:50.012639 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-31 16:13:50.012651 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-31 16:13:50.012664 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-31 16:13:50.012676 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-31 16:13:50.012688 | orchestrator | 2025-05-31 16:13:50.012701 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:13:50.012713 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012727 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012740 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012760 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012773 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012786 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012804 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:13:50.012817 | orchestrator | 2025-05-31 16:13:50.012828 | orchestrator | Saturday 31 May 2025 16:13:47 +0000 (0:00:02.341) 0:00:10.343 ********** 2025-05-31 16:13:50.012839 | orchestrator | =============================================================================== 2025-05-31 16:13:50.012850 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.76s 2025-05-31 16:13:50.012860 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.34s 2025-05-31 16:13:50.012871 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.05s 2025-05-31 16:13:50.012882 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 1.96s 2025-05-31 16:13:50.012987 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:50.013004 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:50.013015 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:50.013340 | orchestrator | 2025-05-31 16:13:50 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:50.014053 | orchestrator | 2025-05-31 16:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:53.059898 | orchestrator | 2025-05-31 16:13:53 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:53.063019 | orchestrator | 2025-05-31 16:13:53 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:13:53.065406 | orchestrator | 2025-05-31 16:13:53 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:53.069918 | orchestrator | 2025-05-31 16:13:53 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:53.076891 | orchestrator | 2025-05-31 16:13:53 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:53.076905 | orchestrator | 2025-05-31 16:13:53 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:53.076910 | orchestrator | 2025-05-31 16:13:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:56.125647 | orchestrator | 2025-05-31 16:13:56 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:56.126293 | orchestrator | 2025-05-31 16:13:56 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:13:56.127516 | orchestrator | 2025-05-31 16:13:56 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:56.130487 | orchestrator | 2025-05-31 16:13:56 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:56.131130 | orchestrator | 2025-05-31 16:13:56 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:56.132296 | orchestrator | 2025-05-31 16:13:56 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:56.132318 | orchestrator | 2025-05-31 16:13:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:13:59.172938 | orchestrator | 2025-05-31 16:13:59 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:13:59.175645 | orchestrator | 2025-05-31 16:13:59 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:13:59.176145 | orchestrator | 2025-05-31 16:13:59 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:13:59.179470 | orchestrator | 2025-05-31 16:13:59 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:13:59.180017 | orchestrator | 2025-05-31 16:13:59 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:13:59.188059 | orchestrator | 2025-05-31 16:13:59 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:13:59.188083 | orchestrator | 2025-05-31 16:13:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:02.224372 | orchestrator | 2025-05-31 16:14:02 | INFO  | Task 
f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:02.225744 | orchestrator | 2025-05-31 16:14:02 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:02.225785 | orchestrator | 2025-05-31 16:14:02 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:02.226901 | orchestrator | 2025-05-31 16:14:02 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:02.228366 | orchestrator | 2025-05-31 16:14:02 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:14:02.233128 | orchestrator | 2025-05-31 16:14:02 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:02.233161 | orchestrator | 2025-05-31 16:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:05.289422 | orchestrator | 2025-05-31 16:14:05 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:05.289525 | orchestrator | 2025-05-31 16:14:05 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:05.290399 | orchestrator | 2025-05-31 16:14:05 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:05.291813 | orchestrator | 2025-05-31 16:14:05 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:05.293067 | orchestrator | 2025-05-31 16:14:05 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:14:05.294400 | orchestrator | 2025-05-31 16:14:05 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:05.294436 | orchestrator | 2025-05-31 16:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:08.356506 | orchestrator | 2025-05-31 16:14:08 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:08.356735 | orchestrator | 2025-05-31 16:14:08 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:08.359093 | orchestrator | 2025-05-31 16:14:08 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:08.359951 | orchestrator | 2025-05-31 16:14:08 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:08.361430 | orchestrator | 2025-05-31 16:14:08 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state STARTED 2025-05-31 16:14:08.364012 | orchestrator | 2025-05-31 16:14:08 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:08.364041 | orchestrator | 2025-05-31 16:14:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:11.422465 | orchestrator | 2025-05-31 16:14:11 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:11.422535 | orchestrator | 2025-05-31 16:14:11 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:11.424048 | orchestrator | 2025-05-31 16:14:11 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:11.426077 | orchestrator | 2025-05-31 16:14:11 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:11.427050 | orchestrator | 2025-05-31 16:14:11 | INFO  | Task 447c95eb-d123-4b10-baab-e2f163ca5be5 is in state SUCCESS 2025-05-31 16:14:11.430830 | orchestrator | 2025-05-31 16:14:11 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:11.430858 | 
orchestrator | 2025-05-31 16:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:14.499050 | orchestrator | 2025-05-31 16:14:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:14.499917 | orchestrator | 2025-05-31 16:14:14 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:14.500224 | orchestrator | 2025-05-31 16:14:14 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:14.501206 | orchestrator | 2025-05-31 16:14:14 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:14.504479 | orchestrator | 2025-05-31 16:14:14 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:14.505053 | orchestrator | 2025-05-31 16:14:14 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:14.505123 | orchestrator | 2025-05-31 16:14:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:17.538487 | orchestrator | 2025-05-31 16:14:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:17.539320 | orchestrator | 2025-05-31 16:14:17 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:17.539810 | orchestrator | 2025-05-31 16:14:17 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:17.540408 | orchestrator | 2025-05-31 16:14:17 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:17.542631 | orchestrator | 2025-05-31 16:14:17 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:17.543282 | orchestrator | 2025-05-31 16:14:17 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:17.544038 | orchestrator | 2025-05-31 16:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:20.597912 | orchestrator | 2025-05-31 16:14:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:20.600903 | orchestrator | 2025-05-31 16:14:20 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:20.601212 | orchestrator | 2025-05-31 16:14:20 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:20.602078 | orchestrator | 2025-05-31 16:14:20 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:20.603780 | orchestrator | 2025-05-31 16:14:20 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:20.604794 | orchestrator | 2025-05-31 16:14:20 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:20.606400 | orchestrator | 2025-05-31 16:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:23.671517 | orchestrator | 2025-05-31 16:14:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:23.671676 | orchestrator | 2025-05-31 16:14:23 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state STARTED 2025-05-31 16:14:23.671964 | orchestrator | 2025-05-31 16:14:23 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:23.675031 | orchestrator | 2025-05-31 16:14:23 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:23.677315 | orchestrator | 2025-05-31 16:14:23 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 
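Editor's note: the geerlingguy.dotfiles play above first checks whether a regular file already sits at the target path ("Remove existing dotfiles file if a replacement is being linked") and then creates the symlink ("Link dotfiles into home folder"). A minimal Python sketch of that check-and-replace pattern, with hypothetical paths and not the role's actual implementation:

```python
from pathlib import Path

# Hypothetical locations: a cloned dotfiles repository and the user's home directory.
repo = Path.home() / "dotfiles"
home = Path.home()

def link_dotfile(name: str) -> None:
    """Replace a plain file in $HOME with a symlink into the dotfiles repo."""
    target = home / name          # e.g. ~/.tmux.conf
    source = repo / name          # e.g. ~/dotfiles/.tmux.conf

    # "Remove existing dotfiles file if a replacement is being linked":
    # only delete when a regular file (not already a link) is in the way.
    if target.exists() and not target.is_symlink():
        target.unlink()

    # "Link dotfiles into home folder": idempotent symlink creation.
    if not target.is_symlink():
        target.symlink_to(source)

link_dotfile(".tmux.conf")
```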
2025-05-31 16:14:23.678939 | orchestrator | 2025-05-31 16:14:23 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:23.678963 | orchestrator | 2025-05-31 16:14:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:26.731077 | orchestrator | 2025-05-31 16:14:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:26.731166 | orchestrator | 2025-05-31 16:14:26 | INFO  | Task f28ab3b2-77d4-45bf-b15e-d6b533cdf904 is in state SUCCESS 2025-05-31 16:14:26.731182 | orchestrator | 2025-05-31 16:14:26 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:26.731193 | orchestrator | 2025-05-31 16:14:26 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:26.731204 | orchestrator | 2025-05-31 16:14:26 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:26.731215 | orchestrator | 2025-05-31 16:14:26 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:26.731225 | orchestrator | 2025-05-31 16:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:29.771168 | orchestrator | 2025-05-31 16:14:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:29.771357 | orchestrator | 2025-05-31 16:14:29 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:29.774868 | orchestrator | 2025-05-31 16:14:29 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:29.775367 | orchestrator | 2025-05-31 16:14:29 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:29.776022 | orchestrator | 2025-05-31 16:14:29 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:29.776042 | orchestrator | 2025-05-31 16:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:32.816945 | orchestrator | 2025-05-31 16:14:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:32.817903 | orchestrator | 2025-05-31 16:14:32 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:32.821556 | orchestrator | 2025-05-31 16:14:32 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:32.823451 | orchestrator | 2025-05-31 16:14:32 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:32.825326 | orchestrator | 2025-05-31 16:14:32 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:32.825396 | orchestrator | 2025-05-31 16:14:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:35.864864 | orchestrator | 2025-05-31 16:14:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:35.865022 | orchestrator | 2025-05-31 16:14:35 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:35.866560 | orchestrator | 2025-05-31 16:14:35 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:35.867606 | orchestrator | 2025-05-31 16:14:35 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state STARTED 2025-05-31 16:14:35.869945 | orchestrator | 2025-05-31 16:14:35 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:35.869994 | orchestrator | 2025-05-31 16:14:35 | INFO  | Wait 1 second(s) until the next check 
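Editor's note: the interleaved "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines come from a loop that polls several background tasks until each leaves the STARTED state. A generic sketch of such a loop follows; it is not the actual OSISM implementation, and `get_task_state` is a stand-in for whatever API returns a task's state:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll a set of task IDs until every one has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding from `pending` during the loop is safe.
        for task_id in sorted(pending):
            state = get_task_state(task_id)          # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```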
2025-05-31 16:14:38.913165 | orchestrator | 2025-05-31 16:14:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:38.913795 | orchestrator | 2025-05-31 16:14:38 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:38.919862 | orchestrator | 2025-05-31 16:14:38.919936 | orchestrator | 2025-05-31 16:14:38.919950 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-31 16:14:38.919963 | orchestrator | 2025-05-31 16:14:38.919974 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-31 16:14:38.919986 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.272) 0:00:00.272 ********** 2025-05-31 16:14:38.919998 | orchestrator | ok: [testbed-manager] => { 2025-05-31 16:14:38.920010 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-05-31 16:14:38.920022 | orchestrator | } 2025-05-31 16:14:38.920033 | orchestrator | 2025-05-31 16:14:38.920044 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-31 16:14:38.920054 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.206) 0:00:00.478 ********** 2025-05-31 16:14:38.920065 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.920076 | orchestrator | 2025-05-31 16:14:38.920087 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-31 16:14:38.920098 | orchestrator | Saturday 31 May 2025 16:13:38 +0000 (0:00:01.099) 0:00:01.578 ********** 2025-05-31 16:14:38.920109 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-31 16:14:38.920119 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-31 16:14:38.920130 | orchestrator | 2025-05-31 16:14:38.920141 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-31 16:14:38.920152 | orchestrator | Saturday 31 May 2025 16:13:39 +0000 (0:00:01.095) 0:00:02.674 ********** 2025-05-31 16:14:38.920162 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.920173 | orchestrator | 2025-05-31 16:14:38.920183 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-31 16:14:38.920194 | orchestrator | Saturday 31 May 2025 16:13:41 +0000 (0:00:01.734) 0:00:04.408 ********** 2025-05-31 16:14:38.920205 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.920216 | orchestrator | 2025-05-31 16:14:38.920226 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-31 16:14:38.920237 | orchestrator | Saturday 31 May 2025 16:13:43 +0000 (0:00:01.448) 0:00:05.857 ********** 2025-05-31 16:14:38.920248 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-05-31 16:14:38.920258 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.920297 | orchestrator | 2025-05-31 16:14:38.920309 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-31 16:14:38.920320 | orchestrator | Saturday 31 May 2025 16:14:08 +0000 (0:00:25.075) 0:00:30.933 ********** 2025-05-31 16:14:38.920331 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.920342 | orchestrator | 2025-05-31 16:14:38.920352 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:14:38.920363 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.920392 | orchestrator | 2025-05-31 16:14:38.920403 | orchestrator | Saturday 31 May 2025 16:14:10 +0000 (0:00:02.655) 0:00:33.588 ********** 2025-05-31 16:14:38.920414 | orchestrator | =============================================================================== 2025-05-31 16:14:38.920425 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.08s 2025-05-31 16:14:38.920435 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.66s 2025-05-31 16:14:38.920466 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.73s 2025-05-31 16:14:38.920479 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.45s 2025-05-31 16:14:38.920491 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.10s 2025-05-31 16:14:38.920503 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.10s 2025-05-31 16:14:38.920514 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.21s 2025-05-31 16:14:38.920526 | orchestrator | 2025-05-31 16:14:38.920538 | orchestrator | 2025-05-31 16:14:38.920550 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-31 16:14:38.920562 | orchestrator | 2025-05-31 16:14:38.920574 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-31 16:14:38.920586 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.318) 0:00:00.318 ********** 2025-05-31 16:14:38.920597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-31 16:14:38.920610 | orchestrator | 2025-05-31 16:14:38.920628 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-31 16:14:38.920641 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.303) 0:00:00.621 ********** 2025-05-31 16:14:38.920653 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-31 16:14:38.920665 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-31 16:14:38.920677 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-31 16:14:38.920689 | orchestrator | 2025-05-31 16:14:38.920701 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-31 16:14:38.920713 | orchestrator | Saturday 31 May 2025 16:13:39 +0000 (0:00:01.351) 0:00:01.972 ********** 2025-05-31 16:14:38.920725 | orchestrator | changed: [testbed-manager] 
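Editor's note: the "FAILED - RETRYING: ... Manage homer service" lines above and the later "Wait for an healthy service" handler both amount to retrying until a container reports a healthy state. A rough sketch of that kind of wait using the Docker CLI; the container name is hypothetical and the real roles do this through Ansible modules and handlers rather than this script:

```python
import subprocess
import time

def wait_until_healthy(container: str, retries: int = 10, delay: float = 5.0) -> bool:
    """Retry until `docker inspect` reports the container's health as 'healthy'."""
    for attempt in range(1, retries + 1):
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "healthy":
            return True
        print(f"FAILED - RETRYING: {container} ({retries - attempt} retries left)")
        time.sleep(delay)
    return False

# Hypothetical usage:
# wait_until_healthy("homer")
```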
2025-05-31 16:14:38.920736 | orchestrator | 2025-05-31 16:14:38.920748 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-31 16:14:38.920760 | orchestrator | Saturday 31 May 2025 16:13:40 +0000 (0:00:01.769) 0:00:03.742 ********** 2025-05-31 16:14:38.920772 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-31 16:14:38.920784 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.920796 | orchestrator | 2025-05-31 16:14:38.920821 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-31 16:14:38.920833 | orchestrator | Saturday 31 May 2025 16:14:17 +0000 (0:00:36.752) 0:00:40.495 ********** 2025-05-31 16:14:38.920843 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.920854 | orchestrator | 2025-05-31 16:14:38.920865 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-31 16:14:38.920876 | orchestrator | Saturday 31 May 2025 16:14:19 +0000 (0:00:01.906) 0:00:42.401 ********** 2025-05-31 16:14:38.920886 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.920897 | orchestrator | 2025-05-31 16:14:38.920908 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-31 16:14:38.920918 | orchestrator | Saturday 31 May 2025 16:14:20 +0000 (0:00:01.025) 0:00:43.427 ********** 2025-05-31 16:14:38.920929 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.920939 | orchestrator | 2025-05-31 16:14:38.920950 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-31 16:14:38.920961 | orchestrator | Saturday 31 May 2025 16:14:22 +0000 (0:00:02.261) 0:00:45.688 ********** 2025-05-31 16:14:38.920971 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.920982 | orchestrator | 2025-05-31 16:14:38.920992 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-31 16:14:38.921003 | orchestrator | Saturday 31 May 2025 16:14:23 +0000 (0:00:00.806) 0:00:46.494 ********** 2025-05-31 16:14:38.921014 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.921030 | orchestrator | 2025-05-31 16:14:38.921041 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-31 16:14:38.921052 | orchestrator | Saturday 31 May 2025 16:14:24 +0000 (0:00:00.569) 0:00:47.064 ********** 2025-05-31 16:14:38.921062 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.921073 | orchestrator | 2025-05-31 16:14:38.921084 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:14:38.921094 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.921105 | orchestrator | 2025-05-31 16:14:38.921116 | orchestrator | Saturday 31 May 2025 16:14:24 +0000 (0:00:00.373) 0:00:47.437 ********** 2025-05-31 16:14:38.921126 | orchestrator | =============================================================================== 2025-05-31 16:14:38.921137 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.75s 2025-05-31 16:14:38.921147 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.26s 2025-05-31 16:14:38.921158 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 1.91s 2025-05-31 16:14:38.921168 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.76s 2025-05-31 16:14:38.921179 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.36s 2025-05-31 16:14:38.921190 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.03s 2025-05-31 16:14:38.921200 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.81s 2025-05-31 16:14:38.921211 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.57s 2025-05-31 16:14:38.921221 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.37s 2025-05-31 16:14:38.921232 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.30s 2025-05-31 16:14:38.921243 | orchestrator | 2025-05-31 16:14:38.921253 | orchestrator | 2025-05-31 16:14:38.921264 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:14:38.921295 | orchestrator | 2025-05-31 16:14:38.921306 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:14:38.921317 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.244) 0:00:00.244 ********** 2025-05-31 16:14:38.921327 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-31 16:14:38.921338 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-31 16:14:38.921348 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-31 16:14:38.921359 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-31 16:14:38.921370 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-31 16:14:38.921380 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-31 16:14:38.921391 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-31 16:14:38.921401 | orchestrator | 2025-05-31 16:14:38.921412 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-31 16:14:38.921422 | orchestrator | 2025-05-31 16:14:38.921438 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-31 16:14:38.921448 | orchestrator | Saturday 31 May 2025 16:13:39 +0000 (0:00:01.588) 0:00:01.833 ********** 2025-05-31 16:14:38.921471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:14:38.921498 | orchestrator | 2025-05-31 16:14:38.921510 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-31 16:14:38.921520 | orchestrator | Saturday 31 May 2025 16:13:41 +0000 (0:00:01.749) 0:00:03.582 ********** 2025-05-31 16:14:38.921531 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:14:38.921548 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:14:38.921559 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.921570 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:14:38.921581 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:14:38.921591 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:14:38.921602 | 
orchestrator | ok: [testbed-node-5] 2025-05-31 16:14:38.921613 | orchestrator | 2025-05-31 16:14:38.921623 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-31 16:14:38.921641 | orchestrator | Saturday 31 May 2025 16:13:43 +0000 (0:00:02.463) 0:00:06.045 ********** 2025-05-31 16:14:38.921652 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.921663 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:14:38.921674 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:14:38.921684 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:14:38.921695 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:14:38.921706 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:14:38.921725 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:14:38.921743 | orchestrator | 2025-05-31 16:14:38.921766 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-31 16:14:38.921793 | orchestrator | Saturday 31 May 2025 16:13:46 +0000 (0:00:02.913) 0:00:08.959 ********** 2025-05-31 16:14:38.921810 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.921827 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:14:38.921844 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:14:38.921862 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:14:38.921880 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:14:38.921897 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:14:38.921916 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:14:38.921932 | orchestrator | 2025-05-31 16:14:38.921943 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-31 16:14:38.921954 | orchestrator | Saturday 31 May 2025 16:13:48 +0000 (0:00:02.183) 0:00:11.142 ********** 2025-05-31 16:14:38.921964 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.921975 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:14:38.921985 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:14:38.921996 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:14:38.922006 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:14:38.922102 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:14:38.922119 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:14:38.922129 | orchestrator | 2025-05-31 16:14:38.922140 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-31 16:14:38.922151 | orchestrator | Saturday 31 May 2025 16:13:58 +0000 (0:00:10.092) 0:00:21.234 ********** 2025-05-31 16:14:38.922161 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:14:38.922172 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:14:38.922183 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:14:38.922194 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:14:38.922204 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:14:38.922215 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:14:38.922225 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.922236 | orchestrator | 2025-05-31 16:14:38.922246 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-31 16:14:38.922257 | orchestrator | Saturday 31 May 2025 16:14:14 +0000 (0:00:15.630) 0:00:36.864 ********** 2025-05-31 16:14:38.922291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:14:38.922312 | orchestrator | 2025-05-31 16:14:38.922323 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-31 16:14:38.922333 | orchestrator | Saturday 31 May 2025 16:14:16 +0000 (0:00:01.637) 0:00:38.502 ********** 2025-05-31 16:14:38.922344 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-31 16:14:38.922355 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-31 16:14:38.922375 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-31 16:14:38.922386 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-31 16:14:38.922396 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-31 16:14:38.922407 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-31 16:14:38.922417 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-31 16:14:38.922428 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-31 16:14:38.922438 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-31 16:14:38.922449 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-31 16:14:38.922459 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-31 16:14:38.922470 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-31 16:14:38.922480 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-31 16:14:38.922491 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-31 16:14:38.922502 | orchestrator | 2025-05-31 16:14:38.922512 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-31 16:14:38.922523 | orchestrator | Saturday 31 May 2025 16:14:22 +0000 (0:00:06.130) 0:00:44.632 ********** 2025-05-31 16:14:38.922534 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.922545 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:14:38.922555 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:14:38.922577 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:14:38.922588 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:14:38.922598 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:14:38.922609 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:14:38.922619 | orchestrator | 2025-05-31 16:14:38.922630 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-31 16:14:38.922640 | orchestrator | Saturday 31 May 2025 16:14:24 +0000 (0:00:02.252) 0:00:46.885 ********** 2025-05-31 16:14:38.922651 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.922662 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:14:38.922672 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:14:38.922683 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:14:38.922693 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:14:38.922704 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:14:38.922714 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:14:38.922725 | orchestrator | 2025-05-31 16:14:38.922736 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-31 16:14:38.922746 | orchestrator | Saturday 31 May 2025 16:14:26 +0000 (0:00:01.962) 0:00:48.848 ********** 2025-05-31 16:14:38.922757 | 
orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.922768 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:14:38.922778 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:14:38.922789 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:14:38.922809 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:14:38.922821 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:14:38.922831 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:14:38.922842 | orchestrator | 2025-05-31 16:14:38.922853 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-31 16:14:38.922863 | orchestrator | Saturday 31 May 2025 16:14:27 +0000 (0:00:01.537) 0:00:50.386 ********** 2025-05-31 16:14:38.922874 | orchestrator | ok: [testbed-manager] 2025-05-31 16:14:38.922884 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:14:38.922895 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:14:38.922905 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:14:38.922916 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:14:38.922926 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:14:38.922937 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:14:38.922947 | orchestrator | 2025-05-31 16:14:38.922958 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-31 16:14:38.922968 | orchestrator | Saturday 31 May 2025 16:14:30 +0000 (0:00:02.762) 0:00:53.149 ********** 2025-05-31 16:14:38.922985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-31 16:14:38.922997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:14:38.923008 | orchestrator | 2025-05-31 16:14:38.923019 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-31 16:14:38.923029 | orchestrator | Saturday 31 May 2025 16:14:32 +0000 (0:00:01.454) 0:00:54.603 ********** 2025-05-31 16:14:38.923040 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.923050 | orchestrator | 2025-05-31 16:14:38.923061 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-31 16:14:38.923072 | orchestrator | Saturday 31 May 2025 16:14:34 +0000 (0:00:01.947) 0:00:56.551 ********** 2025-05-31 16:14:38.923082 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:14:38.923093 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:14:38.923103 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:14:38.923114 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:14:38.923125 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:14:38.923135 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:14:38.923146 | orchestrator | changed: [testbed-manager] 2025-05-31 16:14:38.923157 | orchestrator | 2025-05-31 16:14:38.923167 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:14:38.923178 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923190 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923200 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923211 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923222 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923233 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923243 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:14:38.923254 | orchestrator | 2025-05-31 16:14:38.923265 | orchestrator | Saturday 31 May 2025 16:14:37 +0000 (0:00:03.294) 0:00:59.846 ********** 2025-05-31 16:14:38.923296 | orchestrator | =============================================================================== 2025-05-31 16:14:38.923307 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.63s 2025-05-31 16:14:38.923318 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.09s 2025-05-31 16:14:38.923328 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.13s 2025-05-31 16:14:38.923339 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.29s 2025-05-31 16:14:38.923350 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.91s 2025-05-31 16:14:38.923360 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.76s 2025-05-31 16:14:38.923371 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.46s 2025-05-31 16:14:38.923382 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.25s 2025-05-31 16:14:38.923392 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.18s 2025-05-31 16:14:38.923409 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.96s 2025-05-31 16:14:38.923419 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.95s 2025-05-31 16:14:38.923429 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.75s 2025-05-31 16:14:38.923440 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.64s 2025-05-31 16:14:38.923450 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.59s 2025-05-31 16:14:38.923467 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.54s 2025-05-31 16:14:38.923478 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.45s 2025-05-31 16:14:38.923489 | orchestrator | 2025-05-31 16:14:38 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:38.923500 | orchestrator | 2025-05-31 16:14:38 | INFO  | Task 64b532e6-17d5-44f4-923e-315927b2c504 is in state SUCCESS 2025-05-31 16:14:38.923511 | orchestrator | 2025-05-31 16:14:38 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:38.923522 | orchestrator | 2025-05-31 16:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:41.972997 | orchestrator | 2025-05-31 16:14:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:41.973406 | orchestrator | 
2025-05-31 16:14:41 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:41.974143 | orchestrator | 2025-05-31 16:14:41 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:41.974478 | orchestrator | 2025-05-31 16:14:41 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:41.974622 | orchestrator | 2025-05-31 16:14:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:45.007682 | orchestrator | 2025-05-31 16:14:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:45.007981 | orchestrator | 2025-05-31 16:14:45 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:45.008764 | orchestrator | 2025-05-31 16:14:45 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:45.009443 | orchestrator | 2025-05-31 16:14:45 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:45.009464 | orchestrator | 2025-05-31 16:14:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:48.035828 | orchestrator | 2025-05-31 16:14:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:48.035984 | orchestrator | 2025-05-31 16:14:48 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:48.037528 | orchestrator | 2025-05-31 16:14:48 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:48.038866 | orchestrator | 2025-05-31 16:14:48 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:48.039085 | orchestrator | 2025-05-31 16:14:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:51.089605 | orchestrator | 2025-05-31 16:14:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:51.089716 | orchestrator | 2025-05-31 16:14:51 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:51.089838 | orchestrator | 2025-05-31 16:14:51 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:51.090804 | orchestrator | 2025-05-31 16:14:51 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:51.090875 | orchestrator | 2025-05-31 16:14:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:54.123664 | orchestrator | 2025-05-31 16:14:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:54.124384 | orchestrator | 2025-05-31 16:14:54 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:54.124915 | orchestrator | 2025-05-31 16:14:54 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:54.126441 | orchestrator | 2025-05-31 16:14:54 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:54.126532 | orchestrator | 2025-05-31 16:14:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:14:57.163008 | orchestrator | 2025-05-31 16:14:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:14:57.163307 | orchestrator | 2025-05-31 16:14:57 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:14:57.164014 | orchestrator | 2025-05-31 16:14:57 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:14:57.164865 | orchestrator | 
2025-05-31 16:14:57 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:14:57.165078 | orchestrator | 2025-05-31 16:14:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:00.219563 | orchestrator | 2025-05-31 16:15:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:00.224674 | orchestrator | 2025-05-31 16:15:00 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state STARTED 2025-05-31 16:15:00.225342 | orchestrator | 2025-05-31 16:15:00 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:00.225903 | orchestrator | 2025-05-31 16:15:00 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:00.225967 | orchestrator | 2025-05-31 16:15:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:03.258202 | orchestrator | 2025-05-31 16:15:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:03.258359 | orchestrator | 2025-05-31 16:15:03 | INFO  | Task edb89b7d-3cd0-49c4-81f0-307958583a45 is in state SUCCESS 2025-05-31 16:15:03.258774 | orchestrator | 2025-05-31 16:15:03 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:03.259858 | orchestrator | 2025-05-31 16:15:03 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:03.259885 | orchestrator | 2025-05-31 16:15:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:06.307723 | orchestrator | 2025-05-31 16:15:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:06.311225 | orchestrator | 2025-05-31 16:15:06 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:06.314702 | orchestrator | 2025-05-31 16:15:06 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:06.314748 | orchestrator | 2025-05-31 16:15:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:09.352823 | orchestrator | 2025-05-31 16:15:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:09.353001 | orchestrator | 2025-05-31 16:15:09 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:09.353470 | orchestrator | 2025-05-31 16:15:09 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:09.353541 | orchestrator | 2025-05-31 16:15:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:12.406196 | orchestrator | 2025-05-31 16:15:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:12.407801 | orchestrator | 2025-05-31 16:15:12 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:12.412089 | orchestrator | 2025-05-31 16:15:12 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:12.412179 | orchestrator | 2025-05-31 16:15:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:15.453086 | orchestrator | 2025-05-31 16:15:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:15.453746 | orchestrator | 2025-05-31 16:15:15 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:15.454531 | orchestrator | 2025-05-31 16:15:15 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:15.454905 | orchestrator | 2025-05-31 16:15:15 | INFO 
 | Wait 1 second(s) until the next check 2025-05-31 16:15:18.499263 | orchestrator | 2025-05-31 16:15:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:18.500578 | orchestrator | 2025-05-31 16:15:18 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:18.503372 | orchestrator | 2025-05-31 16:15:18 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:18.503395 | orchestrator | 2025-05-31 16:15:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:21.541555 | orchestrator | 2025-05-31 16:15:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:21.542329 | orchestrator | 2025-05-31 16:15:21 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:21.545155 | orchestrator | 2025-05-31 16:15:21 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:21.545207 | orchestrator | 2025-05-31 16:15:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:24.594249 | orchestrator | 2025-05-31 16:15:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:24.595982 | orchestrator | 2025-05-31 16:15:24 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:24.596760 | orchestrator | 2025-05-31 16:15:24 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:24.596796 | orchestrator | 2025-05-31 16:15:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:27.640132 | orchestrator | 2025-05-31 16:15:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:27.640737 | orchestrator | 2025-05-31 16:15:27 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:27.641651 | orchestrator | 2025-05-31 16:15:27 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:27.641735 | orchestrator | 2025-05-31 16:15:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:30.679925 | orchestrator | 2025-05-31 16:15:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:30.682840 | orchestrator | 2025-05-31 16:15:30 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:30.682967 | orchestrator | 2025-05-31 16:15:30 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:30.682994 | orchestrator | 2025-05-31 16:15:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:33.728513 | orchestrator | 2025-05-31 16:15:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:33.730438 | orchestrator | 2025-05-31 16:15:33 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:33.730508 | orchestrator | 2025-05-31 16:15:33 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:33.730530 | orchestrator | 2025-05-31 16:15:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:36.776122 | orchestrator | 2025-05-31 16:15:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:36.776900 | orchestrator | 2025-05-31 16:15:36 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:36.777695 | orchestrator | 2025-05-31 16:15:36 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in 
state STARTED 2025-05-31 16:15:36.777781 | orchestrator | 2025-05-31 16:15:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:39.825926 | orchestrator | 2025-05-31 16:15:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:39.828965 | orchestrator | 2025-05-31 16:15:39 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:39.832528 | orchestrator | 2025-05-31 16:15:39 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:39.832554 | orchestrator | 2025-05-31 16:15:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:42.886960 | orchestrator | 2025-05-31 16:15:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:42.887350 | orchestrator | 2025-05-31 16:15:42 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:42.888980 | orchestrator | 2025-05-31 16:15:42 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:42.889010 | orchestrator | 2025-05-31 16:15:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:45.940825 | orchestrator | 2025-05-31 16:15:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:45.943380 | orchestrator | 2025-05-31 16:15:45 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:45.944671 | orchestrator | 2025-05-31 16:15:45 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state STARTED 2025-05-31 16:15:45.945067 | orchestrator | 2025-05-31 16:15:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:49.018540 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:49.021242 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state STARTED 2025-05-31 16:15:49.024661 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:49.024703 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:15:49.027890 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:15:49.028994 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:15:49.032479 | orchestrator | 2025-05-31 16:15:49 | INFO  | Task 0d9e82cd-a765-4ceb-9d93-7afb63353659 is in state SUCCESS 2025-05-31 16:15:49.037097 | orchestrator | 2025-05-31 16:15:49.037141 | orchestrator | 2025-05-31 16:15:49.037152 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-31 16:15:49.037162 | orchestrator | 2025-05-31 16:15:49.037171 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-31 16:15:49.037197 | orchestrator | Saturday 31 May 2025 16:13:51 +0000 (0:00:00.178) 0:00:00.178 ********** 2025-05-31 16:15:49.037225 | orchestrator | ok: [testbed-manager] 2025-05-31 16:15:49.037235 | orchestrator | 2025-05-31 16:15:49.037253 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-31 16:15:49.037262 | orchestrator | Saturday 31 May 2025 16:13:52 +0000 (0:00:00.825) 0:00:01.004 ********** 2025-05-31 16:15:49.037271 | orchestrator | changed: 
[testbed-manager] => (item=/opt/phpmyadmin) 2025-05-31 16:15:49.037280 | orchestrator | 2025-05-31 16:15:49.037311 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-31 16:15:49.037320 | orchestrator | Saturday 31 May 2025 16:13:53 +0000 (0:00:00.644) 0:00:01.649 ********** 2025-05-31 16:15:49.037329 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.037338 | orchestrator | 2025-05-31 16:15:49.037346 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-31 16:15:49.037355 | orchestrator | Saturday 31 May 2025 16:13:54 +0000 (0:00:01.269) 0:00:02.918 ********** 2025-05-31 16:15:49.037364 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-05-31 16:15:49.037372 | orchestrator | ok: [testbed-manager] 2025-05-31 16:15:49.037381 | orchestrator | 2025-05-31 16:15:49.037390 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-31 16:15:49.037398 | orchestrator | Saturday 31 May 2025 16:14:58 +0000 (0:01:03.591) 0:01:06.510 ********** 2025-05-31 16:15:49.037407 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.037415 | orchestrator | 2025-05-31 16:15:49.037424 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:15:49.037490 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:15:49.037504 | orchestrator | 2025-05-31 16:15:49.037559 | orchestrator | Saturday 31 May 2025 16:15:02 +0000 (0:00:03.848) 0:01:10.358 ********** 2025-05-31 16:15:49.037569 | orchestrator | =============================================================================== 2025-05-31 16:15:49.037578 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 63.59s 2025-05-31 16:15:49.037588 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.85s 2025-05-31 16:15:49.037596 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.27s 2025-05-31 16:15:49.037605 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.83s 2025-05-31 16:15:49.037615 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.64s 2025-05-31 16:15:49.037630 | orchestrator | 2025-05-31 16:15:49.037639 | orchestrator | 2025-05-31 16:15:49.037648 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-31 16:15:49.037657 | orchestrator | 2025-05-31 16:15:49.037665 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-31 16:15:49.037674 | orchestrator | Saturday 31 May 2025 16:13:33 +0000 (0:00:00.213) 0:00:00.213 ********** 2025-05-31 16:15:49.037683 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:15:49.037694 | orchestrator | 2025-05-31 16:15:49.037703 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-31 16:15:49.037713 | orchestrator | Saturday 31 May 2025 16:13:34 +0000 (0:00:01.247) 0:00:01.461 ********** 2025-05-31 16:15:49.037723 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 
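
Note: the long runs of `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` entries above are the deploy wrapper polling its background tasks until each one reports SUCCESS, after which the buffered Ansible play output (such as the phpmyadmin play recap) is flushed to the console. The sketch below only illustrates that polling pattern; `get_task_state` and the simulated state table are hypothetical stand-ins, not the actual OSISM client API.

```python
import itertools
import time

# Simulated task states for illustration only; the real states come from the
# task backend used by the deployment tooling, which is not modelled here.
_FAKE_STATES = {
    "f2bae605": itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS")),
    "0d9e82cd": itertools.chain(["STARTED"] * 5, itertools.repeat("SUCCESS")),
}

def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for asking the task backend for a task's state."""
    return next(_FAKE_STATES[task_id])

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll every task until all of them report SUCCESS, logging each check."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):      # snapshot so we can discard safely
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)     # finished, stop polling this one
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(["f2bae605", "0d9e82cd"], interval=1.0)
```

In the log above the same pattern simply runs with several concurrent task UUIDs and a roughly three-second gap between checks.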
2025-05-31 16:15:49.037733 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 16:15:49.037743 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 16:15:49.037753 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 16:15:49.037769 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037778 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 16:15:49.037794 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037804 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037819 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 16:15:49.037828 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037838 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037847 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037855 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037864 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 16:15:49.037872 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037881 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037890 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037913 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 16:15:49.037922 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037931 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037940 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 16:15:49.037949 | orchestrator | 2025-05-31 16:15:49.037957 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-31 16:15:49.037966 | orchestrator | Saturday 31 May 2025 16:13:38 +0000 (0:00:03.526) 0:00:04.987 ********** 2025-05-31 16:15:49.037975 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:15:49.037984 | orchestrator | 2025-05-31 16:15:49.037993 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-31 16:15:49.038001 | orchestrator | Saturday 31 May 2025 16:13:39 +0000 (0:00:01.642) 0:00:06.630 ********** 2025-05-31 16:15:49.038078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038094 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038159 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.038187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038277 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
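
Note: each `changed: [...] => (item={'key': ..., 'value': {...}})` entry in the loops above is one iteration over a mapping of service name (fluentd, kolla-toolbox, cron) to its container definition (image, environment, volumes). The Python below is only a rough illustration of that data shape and loop; the service entries are trimmed copies of the values visible in the log, while `render_config_dirs` is a hypothetical helper, not the kolla-ansible implementation.

```python
from pathlib import Path

# Trimmed-down version of the service map seen in the loop items above.
SERVICES = {
    "fluentd": {
        "container_name": "fluentd",
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.5.20241206",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro"],
    },
    "cron": {
        "container_name": "cron",
        "image": "registry.osism.tech/kolla/release/cron:3.0.20241206",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": ["/etc/kolla/cron/:/var/lib/kolla/config_files/:ro"],
    },
}

def render_config_dirs(services: dict, base: Path) -> list[Path]:
    """Hypothetical sketch: create one config directory per service, mirroring
    the 'Ensuring config directories exist' loop in the play above."""
    created = []
    for name, spec in services.items():   # same key/value pairs as the loop items
        target = base / name
        target.mkdir(parents=True, exist_ok=True)
        created.append(target)
        print(f"changed: (item={{'key': {name!r}, 'image': {spec['image']!r}}})")
    return created

if __name__ == "__main__":
    render_config_dirs(SERVICES, Path("/tmp/kolla-demo"))
```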
2025-05-31 16:15:49.038417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.038445 | orchestrator | 2025-05-31 16:15:49.038462 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-31 16:15:49.038472 | orchestrator | Saturday 31 May 2025 16:13:44 +0000 (0:00:04.821) 0:00:11.451 ********** 2025-05-31 16:15:49.038536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038666 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038679 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038688 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:15:49.038698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038739 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:15:49.038748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038762 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038781 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:15:49.038789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038799 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:15:49.038811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038830 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:15:49.038844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-05-31 16:15:49.038854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038877 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:15:49.038886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038918 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:15:49.038927 | orchestrator | 2025-05-31 16:15:49.038936 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-31 16:15:49.038945 | orchestrator | Saturday 31 May 2025 16:13:45 +0000 (0:00:01.202) 0:00:12.654 ********** 2025-05-31 16:15:49.038954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.038969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.038992 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:15:49.039001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.039020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.039030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:49.039117 | orchestrator | 2025-05-31 16:15:49.039161 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:15:49.039171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.039186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039205 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:15:49.039214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.039223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039247 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:15:49.039264 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:15:49.039280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.039355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039381 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:15:49.039390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 16:15:49.039399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.039418 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:15:49.039427 | orchestrator | 2025-05-31 16:15:49.039435 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-31 16:15:49.039444 | orchestrator | Saturday 31 May 2025 16:13:48 +0000 (0:00:02.477) 0:00:15.131 ********** 2025-05-31 16:15:49.039453 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:15:49.039461 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:15:49.039470 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:15:49.039478 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:15:49.039487 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:15:49.039495 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:15:49.039507 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:15:49.039521 | orchestrator | 2025-05-31 16:15:49.039535 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-31 16:15:49.039544 | orchestrator | Saturday 31 May 2025 16:13:49 +0000 (0:00:00.957) 0:00:16.088 ********** 2025-05-31 16:15:49.039553 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:15:49.039561 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:15:49.039570 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:15:49.039587 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:15:49.039596 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:15:49.039604 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:15:49.039613 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:15:49.039621 | orchestrator | 2025-05-31 16:15:49.039629 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-31 16:15:49.039638 | orchestrator | Saturday 31 May 2025 16:13:50 +0000 (0:00:00.720) 0:00:16.808 ********** 2025-05-31 16:15:49.039647 | 
orchestrator | ok: [testbed-node-0] 2025-05-31 16:15:49.039655 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.039664 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.039672 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.039681 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.039689 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.039698 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.039706 | orchestrator | 2025-05-31 16:15:49.039715 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-31 16:15:49.039728 | orchestrator | Saturday 31 May 2025 16:14:21 +0000 (0:00:31.358) 0:00:48.166 ********** 2025-05-31 16:15:49.039738 | orchestrator | ok: [testbed-manager] 2025-05-31 16:15:49.039746 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:15:49.039755 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:15:49.039763 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:15:49.039772 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:15:49.039780 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:15:49.039789 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:15:49.039797 | orchestrator | 2025-05-31 16:15:49.039806 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-31 16:15:49.039815 | orchestrator | Saturday 31 May 2025 16:14:24 +0000 (0:00:02.884) 0:00:51.051 ********** 2025-05-31 16:15:49.039824 | orchestrator | ok: [testbed-manager] 2025-05-31 16:15:49.039832 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:15:49.039841 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:15:49.039849 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:15:49.039858 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:15:49.039865 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:15:49.039873 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:15:49.039881 | orchestrator | 2025-05-31 16:15:49.039889 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-31 16:15:49.039896 | orchestrator | Saturday 31 May 2025 16:14:25 +0000 (0:00:01.244) 0:00:52.296 ********** 2025-05-31 16:15:49.039904 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:15:49.039912 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:15:49.039920 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:15:49.039927 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:15:49.039935 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:15:49.039943 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:15:49.039950 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:15:49.039958 | orchestrator | 2025-05-31 16:15:49.039966 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-31 16:15:49.039974 | orchestrator | Saturday 31 May 2025 16:14:26 +0000 (0:00:01.003) 0:00:53.299 ********** 2025-05-31 16:15:49.039981 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:15:49.039989 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:15:49.039997 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:15:49.040004 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:15:49.040012 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:15:49.040020 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:15:49.040027 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:15:49.040035 | orchestrator | 
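
Note: the `Ensure fluentd image is present for label check` / `Fetch fluentd Docker image labels` / `Set fluentd facts` sequence above first pulls the image and then reads its labels to derive facts used by later tasks. The snippet below is a hedged illustration of that label-inspection step using the Docker CLI; it is not the module the role actually invokes, just the same idea expressed as a small script.

```python
import json
import subprocess

def fetch_image_labels(image: str) -> dict:
    """Return the label map of a locally available Docker image.

    Uses `docker image inspect` with a Go template and assumes the image has
    already been pulled, as the 'Ensure fluentd image is present' task does.
    """
    out = subprocess.run(
        ["docker", "image", "inspect", "--format", "{{json .Config.Labels}}", image],
        check=True,
        capture_output=True,
        text=True,
    ).stdout.strip()
    return json.loads(out) or {}

if __name__ == "__main__":
    labels = fetch_image_labels(
        "registry.osism.tech/kolla/release/fluentd:5.0.5.20241206"
    )
    # Facts such as the fluentd variant would be derived from labels like these.
    print(labels)
```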
2025-05-31 16:15:49.040042 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-31 16:15:49.040050 | orchestrator | Saturday 31 May 2025 16:14:27 +0000 (0:00:00.957) 0:00:54.257 ********** 2025-05-31 16:15:49.040059 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040080 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040125 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.040320 | orchestrator | 2025-05-31 16:15:49.040328 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-31 16:15:49.040336 | orchestrator | Saturday 31 May 2025 16:14:33 +0000 (0:00:05.631) 0:00:59.888 ********** 2025-05-31 16:15:49.040344 | orchestrator | [WARNING]: Skipped 2025-05-31 16:15:49.040352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-31 16:15:49.040360 | orchestrator | to this access issue: 2025-05-31 16:15:49.040368 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-31 16:15:49.040381 | orchestrator | directory 2025-05-31 16:15:49.040390 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:15:49.040398 | orchestrator | 2025-05-31 16:15:49.040406 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-31 16:15:49.040413 | orchestrator | Saturday 31 May 2025 16:14:33 +0000 (0:00:00.740) 0:01:00.629 ********** 2025-05-31 16:15:49.040421 | orchestrator | [WARNING]: Skipped 2025-05-31 16:15:49.040429 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-31 16:15:49.040437 | orchestrator | to this access issue: 2025-05-31 16:15:49.040445 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-31 16:15:49.040452 | orchestrator | directory 2025-05-31 16:15:49.040460 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:15:49.040468 | orchestrator | 2025-05-31 16:15:49.040476 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-31 16:15:49.040484 | orchestrator | Saturday 31 May 2025 16:14:34 +0000 (0:00:00.617) 0:01:01.247 ********** 2025-05-31 16:15:49.040491 | orchestrator | [WARNING]: Skipped 2025-05-31 16:15:49.040499 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-31 
16:15:49.040507 | orchestrator | to this access issue: 2025-05-31 16:15:49.040515 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-31 16:15:49.040523 | orchestrator | directory 2025-05-31 16:15:49.040530 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:15:49.040538 | orchestrator | 2025-05-31 16:15:49.040546 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-31 16:15:49.040554 | orchestrator | Saturday 31 May 2025 16:14:35 +0000 (0:00:00.498) 0:01:01.745 ********** 2025-05-31 16:15:49.040562 | orchestrator | [WARNING]: Skipped 2025-05-31 16:15:49.040570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-31 16:15:49.040577 | orchestrator | to this access issue: 2025-05-31 16:15:49.040585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-31 16:15:49.040593 | orchestrator | directory 2025-05-31 16:15:49.040601 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:15:49.040609 | orchestrator | 2025-05-31 16:15:49.040619 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-31 16:15:49.040633 | orchestrator | Saturday 31 May 2025 16:14:35 +0000 (0:00:00.578) 0:01:02.323 ********** 2025-05-31 16:15:49.040641 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.040651 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.040663 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.040671 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.040679 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.040687 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.040694 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.040702 | orchestrator | 2025-05-31 16:15:49.040710 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-31 16:15:49.040718 | orchestrator | Saturday 31 May 2025 16:14:39 +0000 (0:00:03.835) 0:01:06.158 ********** 2025-05-31 16:15:49.040729 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040738 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040753 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040761 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040769 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040791 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 16:15:49.040868 | orchestrator | 2025-05-31 16:15:49.040879 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-31 16:15:49.040887 | orchestrator | Saturday 31 May 2025 16:14:42 +0000 (0:00:02.600) 0:01:08.759 ********** 2025-05-31 16:15:49.040895 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.040903 | 
orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.040911 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.040925 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.040933 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.040947 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.040957 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.040965 | orchestrator | 2025-05-31 16:15:49.040973 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-31 16:15:49.040981 | orchestrator | Saturday 31 May 2025 16:14:44 +0000 (0:00:02.307) 0:01:11.067 ********** 2025-05-31 16:15:49.040990 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.040998 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041006 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041023 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041046 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041060 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041084 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041100 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041108 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041120 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041149 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041158 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041167 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041175 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:15:49.041192 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041211 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041219 | orchestrator | 2025-05-31 16:15:49.041227 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-31 16:15:49.041235 | orchestrator | Saturday 31 May 2025 16:14:46 +0000 (0:00:02.341) 0:01:13.408 ********** 2025-05-31 16:15:49.041243 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041267 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041275 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041325 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041340 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 16:15:49.041348 | orchestrator | 2025-05-31 16:15:49.041356 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-31 16:15:49.041364 | orchestrator | Saturday 31 May 2025 16:14:48 +0000 (0:00:02.101) 0:01:15.510 ********** 2025-05-31 16:15:49.041372 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041380 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041396 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041419 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041428 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041435 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 16:15:49.041451 | orchestrator | 2025-05-31 16:15:49.041459 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-31 16:15:49.041466 | orchestrator | Saturday 31 May 2025 16:14:51 +0000 (0:00:02.901) 0:01:18.411 ********** 2025-05-31 16:15:49.041475 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041498 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041648 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041679 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 16:15:49.041687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:15:49.041798 | orchestrator | 2025-05-31 16:15:49.041806 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-31 16:15:49.041819 | orchestrator | Saturday 31 May 2025 16:14:54 +0000 (0:00:02.984) 0:01:21.396 ********** 2025-05-31 16:15:49.041827 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.041835 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.041843 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.041851 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.041857 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.041864 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.041871 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.041877 | orchestrator | 2025-05-31 16:15:49.041884 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-31 16:15:49.041891 | orchestrator | Saturday 31 May 2025 16:14:56 +0000 (0:00:01.759) 0:01:23.155 ********** 2025-05-31 16:15:49.041897 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.041904 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.041911 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.041918 | orchestrator | 
changed: [testbed-node-2] 2025-05-31 16:15:49.041932 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.041938 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.041945 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.041951 | orchestrator | 2025-05-31 16:15:49.041958 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.041969 | orchestrator | Saturday 31 May 2025 16:14:57 +0000 (0:00:01.164) 0:01:24.320 ********** 2025-05-31 16:15:49.041976 | orchestrator | 2025-05-31 16:15:49.041983 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.041990 | orchestrator | Saturday 31 May 2025 16:14:57 +0000 (0:00:00.053) 0:01:24.373 ********** 2025-05-31 16:15:49.041996 | orchestrator | 2025-05-31 16:15:49.042097 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.042104 | orchestrator | Saturday 31 May 2025 16:14:57 +0000 (0:00:00.048) 0:01:24.421 ********** 2025-05-31 16:15:49.042111 | orchestrator | 2025-05-31 16:15:49.042117 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.042124 | orchestrator | Saturday 31 May 2025 16:14:57 +0000 (0:00:00.048) 0:01:24.470 ********** 2025-05-31 16:15:49.042131 | orchestrator | 2025-05-31 16:15:49.042137 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.042144 | orchestrator | Saturday 31 May 2025 16:14:57 +0000 (0:00:00.153) 0:01:24.624 ********** 2025-05-31 16:15:49.042151 | orchestrator | 2025-05-31 16:15:49.042158 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.042164 | orchestrator | Saturday 31 May 2025 16:14:58 +0000 (0:00:00.047) 0:01:24.671 ********** 2025-05-31 16:15:49.042171 | orchestrator | 2025-05-31 16:15:49.042177 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 16:15:49.042184 | orchestrator | Saturday 31 May 2025 16:14:58 +0000 (0:00:00.047) 0:01:24.719 ********** 2025-05-31 16:15:49.042191 | orchestrator | 2025-05-31 16:15:49.042198 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-31 16:15:49.042204 | orchestrator | Saturday 31 May 2025 16:14:58 +0000 (0:00:00.063) 0:01:24.782 ********** 2025-05-31 16:15:49.042211 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.042218 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.042224 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.042231 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.042238 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.042251 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.042258 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.042265 | orchestrator | 2025-05-31 16:15:49.042271 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-31 16:15:49.042278 | orchestrator | Saturday 31 May 2025 16:15:07 +0000 (0:00:09.024) 0:01:33.807 ********** 2025-05-31 16:15:49.042297 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.042304 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.042311 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.042317 | orchestrator | 
changed: [testbed-node-5] 2025-05-31 16:15:49.042350 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.042358 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.042364 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.042371 | orchestrator | 2025-05-31 16:15:49.042378 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-31 16:15:49.042384 | orchestrator | Saturday 31 May 2025 16:15:32 +0000 (0:00:25.764) 0:01:59.571 ********** 2025-05-31 16:15:49.042391 | orchestrator | ok: [testbed-manager] 2025-05-31 16:15:49.042398 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:15:49.042405 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:15:49.042411 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:15:49.042418 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:15:49.042425 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:15:49.042431 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:15:49.042438 | orchestrator | 2025-05-31 16:15:49.042449 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-31 16:15:49.042456 | orchestrator | Saturday 31 May 2025 16:15:35 +0000 (0:00:02.371) 0:02:01.943 ********** 2025-05-31 16:15:49.042462 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:15:49.042475 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:15:49.042482 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:15:49.042488 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:15:49.042495 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:15:49.042501 | orchestrator | changed: [testbed-manager] 2025-05-31 16:15:49.042508 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:15:49.042515 | orchestrator | 2025-05-31 16:15:49.042521 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:15:49.042529 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042537 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042550 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042557 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042564 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042571 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042578 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:15:49.042584 | orchestrator | 2025-05-31 16:15:49.042591 | orchestrator | 2025-05-31 16:15:49.042597 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:15:49.042604 | orchestrator | Saturday 31 May 2025 16:15:47 +0000 (0:00:12.024) 0:02:13.967 ********** 2025-05-31 16:15:49.042611 | orchestrator | =============================================================================== 2025-05-31 16:15:49.042617 | orchestrator | common : Ensure fluentd image is present for label check --------------- 31.36s 2025-05-31 16:15:49.042624 | orchestrator | common : Restart kolla-toolbox container 
------------------------------- 25.76s 2025-05-31 16:15:49.042631 | orchestrator | common : Restart cron container ---------------------------------------- 12.02s 2025-05-31 16:15:49.042637 | orchestrator | common : Restart fluentd container -------------------------------------- 9.02s 2025-05-31 16:15:49.042644 | orchestrator | common : Copying over config.json files for services -------------------- 5.63s 2025-05-31 16:15:49.042650 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.82s 2025-05-31 16:15:49.042657 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.84s 2025-05-31 16:15:49.042664 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.53s 2025-05-31 16:15:49.042670 | orchestrator | common : Check common containers ---------------------------------------- 2.98s 2025-05-31 16:15:49.042677 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.90s 2025-05-31 16:15:49.042683 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.88s 2025-05-31 16:15:49.042690 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.60s 2025-05-31 16:15:49.042697 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.48s 2025-05-31 16:15:49.042703 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.37s 2025-05-31 16:15:49.042710 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.34s 2025-05-31 16:15:49.042717 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.31s 2025-05-31 16:15:49.042723 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.10s 2025-05-31 16:15:49.042734 | orchestrator | common : Creating log volume -------------------------------------------- 1.76s 2025-05-31 16:15:49.042748 | orchestrator | common : include_tasks -------------------------------------------------- 1.64s 2025-05-31 16:15:49.042755 | orchestrator | common : include_tasks -------------------------------------------------- 1.25s 2025-05-31 16:15:52.071666 | orchestrator | 2025-05-31 16:15:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:52.071856 | orchestrator | 2025-05-31 16:15:52 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state STARTED 2025-05-31 16:15:52.072384 | orchestrator | 2025-05-31 16:15:52 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:52.073030 | orchestrator | 2025-05-31 16:15:52 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:15:52.073456 | orchestrator | 2025-05-31 16:15:52 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:15:52.074254 | orchestrator | 2025-05-31 16:15:52 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:15:52.074283 | orchestrator | 2025-05-31 16:15:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:55.103688 | orchestrator | 2025-05-31 16:15:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:55.103990 | orchestrator | 2025-05-31 16:15:55 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state STARTED 2025-05-31 16:15:55.107341 | orchestrator | 2025-05-31 16:15:55 | INFO  | Task 
d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:55.107942 | orchestrator | 2025-05-31 16:15:55 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:15:55.108788 | orchestrator | 2025-05-31 16:15:55 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:15:55.110741 | orchestrator | 2025-05-31 16:15:55 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:15:55.110821 | orchestrator | 2025-05-31 16:15:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:15:58.146803 | orchestrator | 2025-05-31 16:15:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:15:58.149654 | orchestrator | 2025-05-31 16:15:58 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state STARTED 2025-05-31 16:15:58.151225 | orchestrator | 2025-05-31 16:15:58 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:15:58.152830 | orchestrator | 2025-05-31 16:15:58 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:15:58.154152 | orchestrator | 2025-05-31 16:15:58 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:15:58.156611 | orchestrator | 2025-05-31 16:15:58 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:15:58.156659 | orchestrator | 2025-05-31 16:15:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:01.190656 | orchestrator | 2025-05-31 16:16:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:01.190940 | orchestrator | 2025-05-31 16:16:01 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state STARTED 2025-05-31 16:16:01.193035 | orchestrator | 2025-05-31 16:16:01 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:01.195198 | orchestrator | 2025-05-31 16:16:01 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:01.195220 | orchestrator | 2025-05-31 16:16:01 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:01.197159 | orchestrator | 2025-05-31 16:16:01 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:01.197183 | orchestrator | 2025-05-31 16:16:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:04.238264 | orchestrator | 2025-05-31 16:16:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:04.241079 | orchestrator | 2025-05-31 16:16:04 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state STARTED 2025-05-31 16:16:04.245003 | orchestrator | 2025-05-31 16:16:04 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:04.245238 | orchestrator | 2025-05-31 16:16:04 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:04.246113 | orchestrator | 2025-05-31 16:16:04 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:04.246702 | orchestrator | 2025-05-31 16:16:04 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:04.246727 | orchestrator | 2025-05-31 16:16:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:07.298722 | orchestrator | 2025-05-31 16:16:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:07.299107 | orchestrator | 2025-05-31 
16:16:07 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:07.299436 | orchestrator | 2025-05-31 16:16:07 | INFO  | Task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab is in state SUCCESS 2025-05-31 16:16:07.300267 | orchestrator | 2025-05-31 16:16:07 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:07.300727 | orchestrator | 2025-05-31 16:16:07 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:07.301171 | orchestrator | 2025-05-31 16:16:07 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:07.302474 | orchestrator | 2025-05-31 16:16:07 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:07.302509 | orchestrator | 2025-05-31 16:16:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:10.340871 | orchestrator | 2025-05-31 16:16:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:10.341775 | orchestrator | 2025-05-31 16:16:10 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:10.343169 | orchestrator | 2025-05-31 16:16:10 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:10.345102 | orchestrator | 2025-05-31 16:16:10 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:10.348080 | orchestrator | 2025-05-31 16:16:10 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:10.349442 | orchestrator | 2025-05-31 16:16:10 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:10.349474 | orchestrator | 2025-05-31 16:16:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:13.393395 | orchestrator | 2025-05-31 16:16:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:13.393617 | orchestrator | 2025-05-31 16:16:13 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:13.400676 | orchestrator | 2025-05-31 16:16:13 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:13.400714 | orchestrator | 2025-05-31 16:16:13 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:13.400744 | orchestrator | 2025-05-31 16:16:13 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:13.400752 | orchestrator | 2025-05-31 16:16:13 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:13.400760 | orchestrator | 2025-05-31 16:16:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:16.428573 | orchestrator | 2025-05-31 16:16:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:16.429537 | orchestrator | 2025-05-31 16:16:16 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:16.429574 | orchestrator | 2025-05-31 16:16:16 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:16.430083 | orchestrator | 2025-05-31 16:16:16 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:16.432212 | orchestrator | 2025-05-31 16:16:16 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:16.432765 | orchestrator | 2025-05-31 16:16:16 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 
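Between plays, the deploy wrapper prints one line per outstanding task ID and then sleeps before re-checking, as the surrounding records show. The following is a minimal sketch of that wait-loop pattern, assuming a hypothetical get_task_state() lookup; it is not the actual osism client code.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states until none is left in STARTED, mirroring the log pattern."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical lookup, e.g. a task-queue result backend
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

In the records above, for example, task ecd2b1c9-25c9-47bf-b1cf-5b41de9edbab reaches SUCCESS at 16:16:07 and no longer appears in the subsequent checks.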
2025-05-31 16:16:16.432848 | orchestrator | 2025-05-31 16:16:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:19.465410 | orchestrator | 2025-05-31 16:16:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:19.465717 | orchestrator | 2025-05-31 16:16:19 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:19.466406 | orchestrator | 2025-05-31 16:16:19 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:19.467049 | orchestrator | 2025-05-31 16:16:19 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state STARTED 2025-05-31 16:16:19.467745 | orchestrator | 2025-05-31 16:16:19 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:19.468414 | orchestrator | 2025-05-31 16:16:19 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:19.468481 | orchestrator | 2025-05-31 16:16:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:22.506102 | orchestrator | 2025-05-31 16:16:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:22.507444 | orchestrator | 2025-05-31 16:16:22 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:22.509104 | orchestrator | 2025-05-31 16:16:22 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:22.510623 | orchestrator | 2025-05-31 16:16:22 | INFO  | Task 889ce835-46f3-45ce-8abb-744aed4f8e74 is in state SUCCESS 2025-05-31 16:16:22.511329 | orchestrator | 2025-05-31 16:16:22.511371 | orchestrator | 2025-05-31 16:16:22.511397 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:16:22.511417 | orchestrator | 2025-05-31 16:16:22.511452 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:16:22.511470 | orchestrator | Saturday 31 May 2025 16:15:50 +0000 (0:00:00.205) 0:00:00.205 ********** 2025-05-31 16:16:22.511482 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:16:22.511494 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:16:22.511504 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:16:22.511515 | orchestrator | 2025-05-31 16:16:22.511526 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:16:22.511537 | orchestrator | Saturday 31 May 2025 16:15:51 +0000 (0:00:00.361) 0:00:00.566 ********** 2025-05-31 16:16:22.511549 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-31 16:16:22.511560 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-31 16:16:22.511595 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-31 16:16:22.511606 | orchestrator | 2025-05-31 16:16:22.511617 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-31 16:16:22.511627 | orchestrator | 2025-05-31 16:16:22.511638 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-31 16:16:22.511648 | orchestrator | Saturday 31 May 2025 16:15:51 +0000 (0:00:00.349) 0:00:00.915 ********** 2025-05-31 16:16:22.511659 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:16:22.511670 | orchestrator | 2025-05-31 16:16:22.511681 | orchestrator | TASK 
[memcached : Ensuring config directories exist] *************************** 2025-05-31 16:16:22.511691 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.771) 0:00:01.687 ********** 2025-05-31 16:16:22.511702 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-31 16:16:22.511713 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-31 16:16:22.511723 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-31 16:16:22.511734 | orchestrator | 2025-05-31 16:16:22.511745 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-31 16:16:22.511755 | orchestrator | Saturday 31 May 2025 16:15:53 +0000 (0:00:00.928) 0:00:02.616 ********** 2025-05-31 16:16:22.511766 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-31 16:16:22.511777 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-31 16:16:22.511787 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-31 16:16:22.511798 | orchestrator | 2025-05-31 16:16:22.511808 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-31 16:16:22.511819 | orchestrator | Saturday 31 May 2025 16:15:55 +0000 (0:00:01.875) 0:00:04.492 ********** 2025-05-31 16:16:22.511830 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:16:22.511841 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:16:22.511852 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:16:22.511862 | orchestrator | 2025-05-31 16:16:22.511873 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-31 16:16:22.511883 | orchestrator | Saturday 31 May 2025 16:15:57 +0000 (0:00:02.448) 0:00:06.940 ********** 2025-05-31 16:16:22.511932 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:16:22.511946 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:16:22.511960 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:16:22.511972 | orchestrator | 2025-05-31 16:16:22.511987 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:16:22.512006 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:16:22.512019 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:16:22.512031 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:16:22.512044 | orchestrator | 2025-05-31 16:16:22.512056 | orchestrator | 2025-05-31 16:16:22.512068 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:16:22.512081 | orchestrator | Saturday 31 May 2025 16:16:05 +0000 (0:00:07.894) 0:00:14.835 ********** 2025-05-31 16:16:22.512120 | orchestrator | =============================================================================== 2025-05-31 16:16:22.512132 | orchestrator | memcached : Restart memcached container --------------------------------- 7.90s 2025-05-31 16:16:22.512144 | orchestrator | memcached : Check memcached container ----------------------------------- 2.45s 2025-05-31 16:16:22.512156 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.88s 2025-05-31 16:16:22.512168 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.93s 2025-05-31 16:16:22.512190 | 
orchestrator | memcached : include_tasks ----------------------------------------------- 0.77s 2025-05-31 16:16:22.512202 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-05-31 16:16:22.512215 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-05-31 16:16:22.512227 | orchestrator | 2025-05-31 16:16:22.512239 | orchestrator | 2025-05-31 16:16:22.512251 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:16:22.512263 | orchestrator | 2025-05-31 16:16:22.512276 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:16:22.512287 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.229) 0:00:00.229 ********** 2025-05-31 16:16:22.512323 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:16:22.512334 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:16:22.512344 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:16:22.512355 | orchestrator | 2025-05-31 16:16:22.512366 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:16:22.512397 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.384) 0:00:00.614 ********** 2025-05-31 16:16:22.512409 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-31 16:16:22.512426 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-31 16:16:22.512437 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-31 16:16:22.512447 | orchestrator | 2025-05-31 16:16:22.512460 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-31 16:16:22.512534 | orchestrator | 2025-05-31 16:16:22.512558 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-31 16:16:22.512579 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.259) 0:00:00.873 ********** 2025-05-31 16:16:22.512599 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:16:22.512621 | orchestrator | 2025-05-31 16:16:22.512637 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-31 16:16:22.512648 | orchestrator | Saturday 31 May 2025 16:15:53 +0000 (0:00:01.064) 0:00:01.937 ********** 2025-05-31 16:16:22.512663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512776 | orchestrator | 2025-05-31 16:16:22.512787 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-31 16:16:22.512798 | orchestrator | Saturday 31 May 2025 16:15:55 +0000 (0:00:01.221) 0:00:03.158 ********** 2025-05-31 16:16:22.512809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512934 | orchestrator | 2025-05-31 16:16:22.512945 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-31 16:16:22.512956 | orchestrator | Saturday 31 May 2025 16:15:57 +0000 (0:00:02.957) 0:00:06.116 ********** 2025-05-31 16:16:22.512967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.512990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513054 | orchestrator | 2025-05-31 16:16:22.513065 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-31 16:16:22.513076 | orchestrator | Saturday 31 May 2025 16:16:00 +0000 (0:00:02.991) 0:00:09.108 ********** 2025-05-31 16:16:22.513087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 16:16:22.513165 | orchestrator | 2025-05-31 16:16:22.513176 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-31 16:16:22.513187 | orchestrator | Saturday 31 May 2025 16:16:03 +0000 (0:00:02.035) 0:00:11.143 ********** 2025-05-31 16:16:22.513198 | orchestrator | 2025-05-31 16:16:22.513208 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-31 16:16:22.513219 | orchestrator | Saturday 31 May 2025 16:16:03 +0000 (0:00:00.054) 0:00:11.198 ********** 2025-05-31 16:16:22.513230 | orchestrator | 2025-05-31 16:16:22.513241 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-31 16:16:22.513251 | orchestrator | Saturday 31 May 2025 16:16:03 +0000 (0:00:00.051) 0:00:11.249 ********** 2025-05-31 16:16:22.513262 | orchestrator | 2025-05-31 16:16:22.513272 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-31 16:16:22.513283 | orchestrator | Saturday 31 May 2025 16:16:03 +0000 (0:00:00.051) 0:00:11.300 ********** 2025-05-31 16:16:22.513340 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:16:22.513353 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:16:22.513363 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:16:22.513374 | orchestrator | 2025-05-31 16:16:22.513385 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-31 16:16:22.513396 | orchestrator | Saturday 31 May 2025 16:16:11 +0000 (0:00:08.337) 0:00:19.638 ********** 2025-05-31 16:16:22.513406 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:16:22.513424 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:16:22.513435 | orchestrator | 
changed: [testbed-node-0]
2025-05-31 16:16:22.513445 | orchestrator |
2025-05-31 16:16:22.513456 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 16:16:22.513467 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:16:22.513484 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:16:22.513496 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:16:22.513507 | orchestrator |
2025-05-31 16:16:22.513517 | orchestrator |
2025-05-31 16:16:22.513528 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 16:16:22.513539 | orchestrator | Saturday 31 May 2025 16:16:19 +0000 (0:00:08.164) 0:00:27.803 **********
2025-05-31 16:16:22.513549 | orchestrator | ===============================================================================
2025-05-31 16:16:22.513560 | orchestrator | redis : Restart redis container ----------------------------------------- 8.34s
2025-05-31 16:16:22.513571 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.17s
2025-05-31 16:16:22.513581 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s
2025-05-31 16:16:22.513592 | orchestrator | redis : Copying over default config.json files -------------------------- 2.96s
2025-05-31 16:16:22.513608 | orchestrator | redis : Check redis containers ------------------------------------------ 2.04s
2025-05-31 16:16:22.513627 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.22s
2025-05-31 16:16:22.513645 | orchestrator | redis : include_tasks --------------------------------------------------- 1.06s
2025-05-31 16:16:22.513662 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2025-05-31 16:16:22.513680 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.26s
2025-05-31 16:16:22.513698 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.16s
2025-05-31 16:16:22.513871 | orchestrator | 2025-05-31 16:16:22 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED
2025-05-31 16:16:22.513899 | orchestrator | 2025-05-31 16:16:22 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED
2025-05-31 16:16:22.513918 | orchestrator | 2025-05-31 16:16:22 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:16:25.559947 | orchestrator | 2025-05-31 16:16:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:16:25.560454 | orchestrator | 2025-05-31 16:16:25 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED
2025-05-31 16:16:25.563095 | orchestrator | 2025-05-31 16:16:25 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED
2025-05-31 16:16:25.563176 | orchestrator | 2025-05-31 16:16:25 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED
2025-05-31 16:16:25.563198 | orchestrator | 2025-05-31 16:16:25 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED
2025-05-31 16:16:25.563285 | orchestrator | 2025-05-31 16:16:25 | INFO  | Wait 1 second(s) until the next check
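Each container item printed in the redis play above carries a kolla-style healthcheck block (interval, retries, start_period, test, timeout, with the time values given in seconds). As a rough illustration only, and not the actual kolla_container module logic, such a block can be mapped onto the equivalent `docker run` health flags:

    # Illustration of the healthcheck dicts shown in the task output above;
    # the values and key names are taken verbatim from the log.
    def healthcheck_to_docker_args(healthcheck: dict) -> list[str]:
        """Map a kolla-style healthcheck dict onto `docker run` health flags."""
        test = healthcheck["test"]  # e.g. ['CMD-SHELL', 'healthcheck_listen redis-server 6379']
        cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
        return [
            "--health-cmd", cmd,
            "--health-interval", f"{healthcheck['interval']}s",
            "--health-retries", str(healthcheck["retries"]),
            "--health-start-period", f"{healthcheck['start_period']}s",
            "--health-timeout", f"{healthcheck['timeout']}s",
        ]

    if __name__ == "__main__":
        example = {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
            "timeout": "30",
        }
        print(" ".join(healthcheck_to_docker_args(example)))

Run as-is, the example prints the health flags corresponding to the redis container item shown above.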
2025-05-31 16:16:28.597860 | orchestrator | 2025-05-31 16:16:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:16:28.597945 | orchestrator | 2025-05-31 16:16:28 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED
2025-05-31 16:16:28.597961 | orchestrator | 2025-05-31 16:16:28 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED
2025-05-31 16:16:28.597988 | orchestrator | 2025-05-31 16:16:28 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED
2025-05-31 16:16:28.598098 | orchestrator | 2025-05-31 16:16:28 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED
2025-05-31 16:16:28.598114 | orchestrator | 2025-05-31 16:16:28 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:16:52.971692 | orchestrator | 2025-05-31 16:16:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:16:52.974362 | orchestrator | 2025-05-31 16:16:52 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED
2025-05-31 16:16:52.977515 | orchestrator | 2025-05-31 16:16:52 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED
2025-05-31 16:16:52.977549 | orchestrator | 2025-05-31 16:16:52 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED
2025-05-31 16:16:52.978518 | orchestrator | 2025-05-31 16:16:52 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED
2025-05-31 16:16:52.978664 | orchestrator | 2025-05-31 16:16:52 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:16:56.016521 | orchestrator | 2025-05-31
16:16:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:56.017990 | orchestrator | 2025-05-31 16:16:56 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:56.019688 | orchestrator | 2025-05-31 16:16:56 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:56.021627 | orchestrator | 2025-05-31 16:16:56 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:56.021662 | orchestrator | 2025-05-31 16:16:56 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:56.021676 | orchestrator | 2025-05-31 16:16:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:16:59.053507 | orchestrator | 2025-05-31 16:16:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:16:59.057965 | orchestrator | 2025-05-31 16:16:59 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:16:59.058335 | orchestrator | 2025-05-31 16:16:59 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:16:59.058868 | orchestrator | 2025-05-31 16:16:59 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state STARTED 2025-05-31 16:16:59.061784 | orchestrator | 2025-05-31 16:16:59 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:16:59.061832 | orchestrator | 2025-05-31 16:16:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:02.087778 | orchestrator | 2025-05-31 16:17:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:02.088103 | orchestrator | 2025-05-31 16:17:02 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:02.088780 | orchestrator | 2025-05-31 16:17:02 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:02.089375 | orchestrator | 2025-05-31 16:17:02 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:02.090284 | orchestrator | 2025-05-31 16:17:02 | INFO  | Task 7b07c6f8-5e3f-4dff-9686-6aa75e183258 is in state SUCCESS 2025-05-31 16:17:02.093555 | orchestrator | 2025-05-31 16:17:02 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:02.093591 | orchestrator | 2025-05-31 16:17:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:02.094781 | orchestrator | 2025-05-31 16:17:02.094870 | orchestrator | 2025-05-31 16:17:02.094884 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:17:02.094895 | orchestrator | 2025-05-31 16:17:02.094919 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:17:02.094931 | orchestrator | Saturday 31 May 2025 16:15:51 +0000 (0:00:00.302) 0:00:00.302 ********** 2025-05-31 16:17:02.094942 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:17:02.094954 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:17:02.094965 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:17:02.094975 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:17:02.094986 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:17:02.095022 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:17:02.095035 | orchestrator | 2025-05-31 16:17:02.095046 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:17:02.095057 | 
orchestrator | Saturday 31 May 2025 16:15:51 +0000 (0:00:00.736) 0:00:01.038 ********** 2025-05-31 16:17:02.095068 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 16:17:02.095079 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 16:17:02.095089 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 16:17:02.095100 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 16:17:02.095111 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 16:17:02.095121 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 16:17:02.096149 | orchestrator | 2025-05-31 16:17:02.096178 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-31 16:17:02.096189 | orchestrator | 2025-05-31 16:17:02.096200 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-31 16:17:02.096211 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.894) 0:00:01.933 ********** 2025-05-31 16:17:02.096223 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:17:02.096259 | orchestrator | 2025-05-31 16:17:02.096271 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-31 16:17:02.096282 | orchestrator | Saturday 31 May 2025 16:15:54 +0000 (0:00:01.416) 0:00:03.349 ********** 2025-05-31 16:17:02.096312 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-31 16:17:02.096325 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-31 16:17:02.096336 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-31 16:17:02.096347 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-31 16:17:02.096357 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-31 16:17:02.096368 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-31 16:17:02.096379 | orchestrator | 2025-05-31 16:17:02.096389 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-31 16:17:02.096400 | orchestrator | Saturday 31 May 2025 16:15:55 +0000 (0:00:01.236) 0:00:04.585 ********** 2025-05-31 16:17:02.096411 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-31 16:17:02.096422 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-31 16:17:02.096433 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-31 16:17:02.096443 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-31 16:17:02.096454 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-31 16:17:02.096464 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-31 16:17:02.096475 | orchestrator | 2025-05-31 16:17:02.096486 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-31 16:17:02.096496 | orchestrator | Saturday 31 May 2025 16:15:57 +0000 (0:00:02.069) 0:00:06.655 ********** 2025-05-31 16:17:02.096507 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-31 16:17:02.096518 | 
orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-31 16:17:02.096528 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:17:02.096554 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-31 16:17:02.096565 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:17:02.096576 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-31 16:17:02.096587 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:17:02.096598 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-31 16:17:02.096608 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:17:02.096619 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:17:02.096630 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-31 16:17:02.096640 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:17:02.096651 | orchestrator | 2025-05-31 16:17:02.096662 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-31 16:17:02.096673 | orchestrator | Saturday 31 May 2025 16:15:58 +0000 (0:00:01.306) 0:00:07.962 ********** 2025-05-31 16:17:02.096683 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:17:02.096694 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:17:02.096705 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:17:02.096715 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:17:02.096726 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:17:02.096737 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:17:02.096747 | orchestrator | 2025-05-31 16:17:02.096758 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-31 16:17:02.096769 | orchestrator | Saturday 31 May 2025 16:15:59 +0000 (0:00:00.551) 0:00:08.513 ********** 2025-05-31 16:17:02.096806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.096968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097021 | orchestrator | 2025-05-31 16:17:02.097033 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-31 16:17:02.097044 | orchestrator | Saturday 31 May 2025 16:16:01 +0000 (0:00:02.111) 0:00:10.625 ********** 2025-05-31 16:17:02.097056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-05-31 16:17:02.097101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097247 | orchestrator | 2025-05-31 16:17:02.097258 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-31 16:17:02.097269 | orchestrator | Saturday 31 May 2025 16:16:04 +0000 (0:00:02.496) 0:00:13.121 ********** 2025-05-31 16:17:02.097280 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:17:02.097305 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:17:02.097317 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:17:02.097328 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:17:02.097339 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:17:02.097350 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:17:02.097361 | orchestrator | 2025-05-31 16:17:02.097372 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-31 16:17:02.097383 | orchestrator | Saturday 31 May 2025 16:16:06 +0000 (0:00:01.936) 0:00:15.058 ********** 2025-05-31 16:17:02.097393 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:17:02.097404 | 
orchestrator | changed: [testbed-node-1] 2025-05-31 16:17:02.097415 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:17:02.097425 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:17:02.097436 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:17:02.097447 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:17:02.097457 | orchestrator | 2025-05-31 16:17:02.097468 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-31 16:17:02.097479 | orchestrator | Saturday 31 May 2025 16:16:08 +0000 (0:00:02.370) 0:00:17.428 ********** 2025-05-31 16:17:02.097490 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:17:02.097501 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:17:02.097511 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:17:02.097522 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:17:02.097532 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:17:02.097543 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:17:02.097554 | orchestrator | 2025-05-31 16:17:02.097565 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-31 16:17:02.097576 | orchestrator | Saturday 31 May 2025 16:16:09 +0000 (0:00:01.256) 0:00:18.685 ********** 2025-05-31 16:17:02.097587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 16:17:02.097799 | orchestrator | 2025-05-31 16:17:02.097811 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 16:17:02.097822 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:03.423) 0:00:22.108 ********** 2025-05-31 16:17:02.097833 | 
orchestrator | 2025-05-31 16:17:02.097844 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 16:17:02.097855 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:00.316) 0:00:22.425 ********** 2025-05-31 16:17:02.097865 | orchestrator | 2025-05-31 16:17:02.097876 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 16:17:02.097887 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:00.244) 0:00:22.670 ********** 2025-05-31 16:17:02.097897 | orchestrator | 2025-05-31 16:17:02.097908 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 16:17:02.097925 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:00.126) 0:00:22.797 ********** 2025-05-31 16:17:02.097936 | orchestrator | 2025-05-31 16:17:02.097947 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 16:17:02.097958 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:00.187) 0:00:22.984 ********** 2025-05-31 16:17:02.097969 | orchestrator | 2025-05-31 16:17:02.097979 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 16:17:02.097990 | orchestrator | Saturday 31 May 2025 16:16:14 +0000 (0:00:00.101) 0:00:23.086 ********** 2025-05-31 16:17:02.098001 | orchestrator | 2025-05-31 16:17:02.098011 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-31 16:17:02.098074 | orchestrator | Saturday 31 May 2025 16:16:14 +0000 (0:00:00.332) 0:00:23.418 ********** 2025-05-31 16:17:02.098085 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:17:02.098096 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:17:02.098107 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:17:02.098118 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:17:02.098128 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:17:02.098139 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:17:02.098150 | orchestrator | 2025-05-31 16:17:02.098160 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-31 16:17:02.098171 | orchestrator | Saturday 31 May 2025 16:16:25 +0000 (0:00:10.888) 0:00:34.307 ********** 2025-05-31 16:17:02.098421 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:17:02.098500 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:17:02.098514 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:17:02.098526 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:17:02.098550 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:17:02.098561 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:17:02.098572 | orchestrator | 2025-05-31 16:17:02.098584 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-31 16:17:02.098596 | orchestrator | Saturday 31 May 2025 16:16:27 +0000 (0:00:01.924) 0:00:36.232 ********** 2025-05-31 16:17:02.098607 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:17:02.098619 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:17:02.098629 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:17:02.098640 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:17:02.098651 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:17:02.098661 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:17:02.098672 | orchestrator | 2025-05-31 
16:17:02.098683 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-31 16:17:02.098694 | orchestrator | Saturday 31 May 2025 16:16:36 +0000 (0:00:09.444) 0:00:45.676 ********** 2025-05-31 16:17:02.098706 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-31 16:17:02.098739 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-31 16:17:02.098751 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-31 16:17:02.098762 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-31 16:17:02.098773 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-31 16:17:02.098784 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-31 16:17:02.098794 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-31 16:17:02.098805 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-31 16:17:02.098815 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-31 16:17:02.098826 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-31 16:17:02.098837 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-31 16:17:02.098847 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-31 16:17:02.098858 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 16:17:02.098869 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 16:17:02.098879 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 16:17:02.098890 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 16:17:02.098901 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 16:17:02.098911 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 16:17:02.098922 | orchestrator | 2025-05-31 16:17:02.098933 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-31 16:17:02.098944 | orchestrator | Saturday 31 May 2025 16:16:44 +0000 (0:00:08.129) 0:00:53.806 ********** 2025-05-31 16:17:02.098954 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-31 16:17:02.098965 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:17:02.098976 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-31 16:17:02.098989 | orchestrator 
| skipping: [testbed-node-4] 2025-05-31 16:17:02.099003 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-31 16:17:02.099015 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:17:02.099028 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-31 16:17:02.099040 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-31 16:17:02.099053 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-31 16:17:02.099063 | orchestrator | 2025-05-31 16:17:02.099074 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-31 16:17:02.099085 | orchestrator | Saturday 31 May 2025 16:16:47 +0000 (0:00:02.348) 0:00:56.154 ********** 2025-05-31 16:17:02.099096 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-31 16:17:02.099107 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:17:02.099118 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-31 16:17:02.099129 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:17:02.099146 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-31 16:17:02.099157 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:17:02.099168 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-31 16:17:02.099193 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-31 16:17:02.099205 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-31 16:17:02.099216 | orchestrator | 2025-05-31 16:17:02.099231 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-31 16:17:02.099242 | orchestrator | Saturday 31 May 2025 16:16:51 +0000 (0:00:03.904) 0:01:00.059 ********** 2025-05-31 16:17:02.099253 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:17:02.099264 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:17:02.099275 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:17:02.099285 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:17:02.099330 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:17:02.099342 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:17:02.099352 | orchestrator | 2025-05-31 16:17:02.099363 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:17:02.099375 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:17:02.099387 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:17:02.099398 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:17:02.099410 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 16:17:02.099421 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 16:17:02.099432 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 16:17:02.099442 | orchestrator | 2025-05-31 16:17:02.099453 | orchestrator | 2025-05-31 16:17:02.099464 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:17:02.099476 | orchestrator | Saturday 31 May 2025 16:16:59 +0000 (0:00:08.691) 0:01:08.751 
********** 2025-05-31 16:17:02.099486 | orchestrator | =============================================================================== 2025-05-31 16:17:02.099497 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.14s 2025-05-31 16:17:02.099508 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.89s 2025-05-31 16:17:02.099519 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.13s 2025-05-31 16:17:02.099536 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.90s 2025-05-31 16:17:02.099556 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.42s 2025-05-31 16:17:02.099574 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.50s 2025-05-31 16:17:02.099593 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.37s 2025-05-31 16:17:02.099611 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.35s 2025-05-31 16:17:02.099628 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.11s 2025-05-31 16:17:02.099647 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.07s 2025-05-31 16:17:02.099666 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 1.94s 2025-05-31 16:17:02.099681 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.92s 2025-05-31 16:17:02.099701 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.42s 2025-05-31 16:17:02.099712 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.31s 2025-05-31 16:17:02.099723 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.31s 2025-05-31 16:17:02.099733 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.26s 2025-05-31 16:17:02.099744 | orchestrator | module-load : Load modules ---------------------------------------------- 1.24s 2025-05-31 16:17:02.099754 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2025-05-31 16:17:02.099765 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2025-05-31 16:17:02.099776 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.55s 2025-05-31 16:17:05.121215 | orchestrator | 2025-05-31 16:17:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:05.121609 | orchestrator | 2025-05-31 16:17:05 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:05.123281 | orchestrator | 2025-05-31 16:17:05 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:05.126048 | orchestrator | 2025-05-31 16:17:05 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:05.127976 | orchestrator | 2025-05-31 16:17:05 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:05.127987 | orchestrator | 2025-05-31 16:17:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:08.155492 | orchestrator | 2025-05-31 16:17:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 
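The healthcheck entries in the openvswitch items above amount to running 'ovsdb-client list-dbs' inside the openvswitch_db container and 'ovs-appctl version' inside the openvswitch_vswitchd container every 30 seconds. A minimal sketch for repeating those two checks by hand on a node, assuming Docker is the container runtime and 'docker exec' is permitted (neither assumption comes from this log):

    import subprocess

    # Healthcheck commands and container names as shown in the service
    # definitions printed by the openvswitch tasks above.
    CHECKS = {
        "openvswitch_db": ["ovsdb-client", "list-dbs"],
        "openvswitch_vswitchd": ["ovs-appctl", "version"],
    }

    def run_healthchecks(timeout: int = 30) -> bool:
        """Run each check via 'docker exec' and report healthy/unhealthy."""
        all_ok = True
        for container, command in CHECKS.items():
            try:
                result = subprocess.run(
                    ["docker", "exec", container, *command],
                    capture_output=True, text=True, timeout=timeout,
                )
                ok = result.returncode == 0
            except subprocess.TimeoutExpired:
                ok = False
            print(f"{container}: {'healthy' if ok else 'unhealthy'}")
            all_ok = all_ok and ok
        return all_ok

    if __name__ == "__main__":
        raise SystemExit(0 if run_healthchecks() else 1)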
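The INFO lines around this point form a polling loop: five task IDs are queried, each is reported as STARTED, and the check repeats after a short wait until a task reaches SUCCESS (visible further below at 16:18:24). A minimal sketch of that pattern; get_task_state() is a hypothetical stand-in for whatever API the deployment wrapper actually queries, which this log does not show:

    import time
    from typing import Callable, Dict, List

    def wait_for_tasks(task_ids: List[str],
                       get_task_state: Callable[[str], str],
                       interval: float = 1.0) -> Dict[str, str]:
        """Poll task states until none of them is reported as STARTED."""
        while True:
            states = {task_id: get_task_state(task_id) for task_id in task_ids}
            for task_id, state in states.items():
                print(f"Task {task_id} is in state {state}")
            if all(state != "STARTED" for state in states.values()):
                return states
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)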
2025-05-31 16:17:08.155837 | orchestrator | 2025-05-31 16:17:08 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:08.156476 | orchestrator | 2025-05-31 16:17:08 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:08.157455 | orchestrator | 2025-05-31 16:17:08 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:08.158010 | orchestrator | 2025-05-31 16:17:08 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:08.158165 | orchestrator | 2025-05-31 16:17:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:11.200344 | orchestrator | 2025-05-31 16:17:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:11.202227 | orchestrator | 2025-05-31 16:17:11 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:11.202564 | orchestrator | 2025-05-31 16:17:11 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:11.203141 | orchestrator | 2025-05-31 16:17:11 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:11.209137 | orchestrator | 2025-05-31 16:17:11 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:11.209216 | orchestrator | 2025-05-31 16:17:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:14.237745 | orchestrator | 2025-05-31 16:17:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:14.240177 | orchestrator | 2025-05-31 16:17:14 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:14.241113 | orchestrator | 2025-05-31 16:17:14 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:14.241496 | orchestrator | 2025-05-31 16:17:14 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:14.242820 | orchestrator | 2025-05-31 16:17:14 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:14.242843 | orchestrator | 2025-05-31 16:17:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:17.286229 | orchestrator | 2025-05-31 16:17:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:17.286436 | orchestrator | 2025-05-31 16:17:17 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:17.286528 | orchestrator | 2025-05-31 16:17:17 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:17.287243 | orchestrator | 2025-05-31 16:17:17 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:17.288008 | orchestrator | 2025-05-31 16:17:17 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:17.288148 | orchestrator | 2025-05-31 16:17:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:20.329901 | orchestrator | 2025-05-31 16:17:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:20.332589 | orchestrator | 2025-05-31 16:17:20 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:20.334451 | orchestrator | 2025-05-31 16:17:20 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:20.336644 | orchestrator | 2025-05-31 16:17:20 | INFO  | Task 
c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:20.339396 | orchestrator | 2025-05-31 16:17:20 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:20.339784 | orchestrator | 2025-05-31 16:17:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:23.375107 | orchestrator | 2025-05-31 16:17:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:23.375523 | orchestrator | 2025-05-31 16:17:23 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:23.376862 | orchestrator | 2025-05-31 16:17:23 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:23.377097 | orchestrator | 2025-05-31 16:17:23 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:23.380554 | orchestrator | 2025-05-31 16:17:23 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:23.380682 | orchestrator | 2025-05-31 16:17:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:26.427610 | orchestrator | 2025-05-31 16:17:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:26.428060 | orchestrator | 2025-05-31 16:17:26 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:26.430751 | orchestrator | 2025-05-31 16:17:26 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:26.434372 | orchestrator | 2025-05-31 16:17:26 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:26.438881 | orchestrator | 2025-05-31 16:17:26 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:26.438919 | orchestrator | 2025-05-31 16:17:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:29.470109 | orchestrator | 2025-05-31 16:17:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:29.470195 | orchestrator | 2025-05-31 16:17:29 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:29.470661 | orchestrator | 2025-05-31 16:17:29 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:29.471362 | orchestrator | 2025-05-31 16:17:29 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:29.472200 | orchestrator | 2025-05-31 16:17:29 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:29.472225 | orchestrator | 2025-05-31 16:17:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:32.507601 | orchestrator | 2025-05-31 16:17:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:32.509071 | orchestrator | 2025-05-31 16:17:32 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:32.511014 | orchestrator | 2025-05-31 16:17:32 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:32.512611 | orchestrator | 2025-05-31 16:17:32 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:32.514154 | orchestrator | 2025-05-31 16:17:32 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:32.514233 | orchestrator | 2025-05-31 16:17:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:35.545624 | orchestrator | 2025-05-31 16:17:35 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:35.547360 | orchestrator | 2025-05-31 16:17:35 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:35.547503 | orchestrator | 2025-05-31 16:17:35 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:35.548383 | orchestrator | 2025-05-31 16:17:35 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:35.549377 | orchestrator | 2025-05-31 16:17:35 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:35.549409 | orchestrator | 2025-05-31 16:17:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:38.586571 | orchestrator | 2025-05-31 16:17:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:38.588084 | orchestrator | 2025-05-31 16:17:38 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:38.588178 | orchestrator | 2025-05-31 16:17:38 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:38.588358 | orchestrator | 2025-05-31 16:17:38 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:38.589204 | orchestrator | 2025-05-31 16:17:38 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:38.589357 | orchestrator | 2025-05-31 16:17:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:41.621897 | orchestrator | 2025-05-31 16:17:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:41.625377 | orchestrator | 2025-05-31 16:17:41 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:41.627244 | orchestrator | 2025-05-31 16:17:41 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:41.628871 | orchestrator | 2025-05-31 16:17:41 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:41.631936 | orchestrator | 2025-05-31 16:17:41 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:41.632044 | orchestrator | 2025-05-31 16:17:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:44.665989 | orchestrator | 2025-05-31 16:17:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:44.666700 | orchestrator | 2025-05-31 16:17:44 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:44.667221 | orchestrator | 2025-05-31 16:17:44 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:44.668543 | orchestrator | 2025-05-31 16:17:44 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:44.669438 | orchestrator | 2025-05-31 16:17:44 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:44.669482 | orchestrator | 2025-05-31 16:17:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:47.704793 | orchestrator | 2025-05-31 16:17:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:47.707153 | orchestrator | 2025-05-31 16:17:47 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:47.707793 | orchestrator | 2025-05-31 16:17:47 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:47.709695 | orchestrator | 2025-05-31 
16:17:47 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:47.710338 | orchestrator | 2025-05-31 16:17:47 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:47.710767 | orchestrator | 2025-05-31 16:17:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:50.750157 | orchestrator | 2025-05-31 16:17:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:50.750267 | orchestrator | 2025-05-31 16:17:50 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:50.750491 | orchestrator | 2025-05-31 16:17:50 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:50.751221 | orchestrator | 2025-05-31 16:17:50 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:50.752119 | orchestrator | 2025-05-31 16:17:50 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:50.752147 | orchestrator | 2025-05-31 16:17:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:53.784852 | orchestrator | 2025-05-31 16:17:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:53.785024 | orchestrator | 2025-05-31 16:17:53 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:53.789026 | orchestrator | 2025-05-31 16:17:53 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:53.789068 | orchestrator | 2025-05-31 16:17:53 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:53.789077 | orchestrator | 2025-05-31 16:17:53 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:53.789085 | orchestrator | 2025-05-31 16:17:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:56.816694 | orchestrator | 2025-05-31 16:17:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:56.817401 | orchestrator | 2025-05-31 16:17:56 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:56.818529 | orchestrator | 2025-05-31 16:17:56 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:56.819491 | orchestrator | 2025-05-31 16:17:56 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:56.820249 | orchestrator | 2025-05-31 16:17:56 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:56.820600 | orchestrator | 2025-05-31 16:17:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:17:59.855161 | orchestrator | 2025-05-31 16:17:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:17:59.856037 | orchestrator | 2025-05-31 16:17:59 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:17:59.856736 | orchestrator | 2025-05-31 16:17:59 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:17:59.857482 | orchestrator | 2025-05-31 16:17:59 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:17:59.860928 | orchestrator | 2025-05-31 16:17:59 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:17:59.860978 | orchestrator | 2025-05-31 16:17:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:02.895698 | orchestrator | 2025-05-31 
16:18:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:02.897665 | orchestrator | 2025-05-31 16:18:02 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:02.897699 | orchestrator | 2025-05-31 16:18:02 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:02.897713 | orchestrator | 2025-05-31 16:18:02 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:02.897823 | orchestrator | 2025-05-31 16:18:02 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:02.897841 | orchestrator | 2025-05-31 16:18:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:05.929655 | orchestrator | 2025-05-31 16:18:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:05.930637 | orchestrator | 2025-05-31 16:18:05 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:05.931815 | orchestrator | 2025-05-31 16:18:05 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:05.932822 | orchestrator | 2025-05-31 16:18:05 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:05.933927 | orchestrator | 2025-05-31 16:18:05 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:05.934000 | orchestrator | 2025-05-31 16:18:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:08.977030 | orchestrator | 2025-05-31 16:18:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:08.978908 | orchestrator | 2025-05-31 16:18:08 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:08.982011 | orchestrator | 2025-05-31 16:18:08 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:08.984686 | orchestrator | 2025-05-31 16:18:08 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:08.987025 | orchestrator | 2025-05-31 16:18:08 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:08.987666 | orchestrator | 2025-05-31 16:18:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:12.040179 | orchestrator | 2025-05-31 16:18:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:12.040279 | orchestrator | 2025-05-31 16:18:12 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:12.040747 | orchestrator | 2025-05-31 16:18:12 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:12.044307 | orchestrator | 2025-05-31 16:18:12 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:12.044339 | orchestrator | 2025-05-31 16:18:12 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:12.044351 | orchestrator | 2025-05-31 16:18:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:15.084167 | orchestrator | 2025-05-31 16:18:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:15.084997 | orchestrator | 2025-05-31 16:18:15 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:15.087740 | orchestrator | 2025-05-31 16:18:15 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:15.090354 | 
orchestrator | 2025-05-31 16:18:15 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:15.091352 | orchestrator | 2025-05-31 16:18:15 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:15.091389 | orchestrator | 2025-05-31 16:18:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:18.121193 | orchestrator | 2025-05-31 16:18:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:18.121757 | orchestrator | 2025-05-31 16:18:18 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:18.124273 | orchestrator | 2025-05-31 16:18:18 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:18.125808 | orchestrator | 2025-05-31 16:18:18 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:18.125967 | orchestrator | 2025-05-31 16:18:18 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:18.125992 | orchestrator | 2025-05-31 16:18:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:21.169356 | orchestrator | 2025-05-31 16:18:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:21.173094 | orchestrator | 2025-05-31 16:18:21 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state STARTED 2025-05-31 16:18:21.175562 | orchestrator | 2025-05-31 16:18:21 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:21.177122 | orchestrator | 2025-05-31 16:18:21 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:21.178670 | orchestrator | 2025-05-31 16:18:21 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:21.178698 | orchestrator | 2025-05-31 16:18:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:24.226347 | orchestrator | 2025-05-31 16:18:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:24.227842 | orchestrator | 2025-05-31 16:18:24 | INFO  | Task ef3aec72-1fb9-4e25-9dc3-8cc976911e74 is in state SUCCESS 2025-05-31 16:18:24.229128 | orchestrator | 2025-05-31 16:18:24.229167 | orchestrator | 2025-05-31 16:18:24.229181 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-31 16:18:24.229193 | orchestrator | 2025-05-31 16:18:24.229205 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-31 16:18:24.229217 | orchestrator | Saturday 31 May 2025 16:16:09 +0000 (0:00:00.111) 0:00:00.111 ********** 2025-05-31 16:18:24.229228 | orchestrator | ok: [localhost] => { 2025-05-31 16:18:24.229242 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-31 16:18:24.229253 | orchestrator | } 2025-05-31 16:18:24.229265 | orchestrator | 2025-05-31 16:18:24.229308 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-31 16:18:24.229350 | orchestrator | Saturday 31 May 2025 16:16:09 +0000 (0:00:00.039) 0:00:00.151 ********** 2025-05-31 16:18:24.229363 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-31 16:18:24.229375 | orchestrator | ...ignoring 2025-05-31 16:18:24.229386 | orchestrator | 2025-05-31 16:18:24.229397 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-31 16:18:24.229408 | orchestrator | Saturday 31 May 2025 16:16:12 +0000 (0:00:03.056) 0:00:03.207 ********** 2025-05-31 16:18:24.229418 | orchestrator | skipping: [localhost] 2025-05-31 16:18:24.229429 | orchestrator | 2025-05-31 16:18:24.229440 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-31 16:18:24.229450 | orchestrator | Saturday 31 May 2025 16:16:12 +0000 (0:00:00.103) 0:00:03.311 ********** 2025-05-31 16:18:24.229460 | orchestrator | ok: [localhost] 2025-05-31 16:18:24.229471 | orchestrator | 2025-05-31 16:18:24.229482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:18:24.229492 | orchestrator | 2025-05-31 16:18:24.229503 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:18:24.229513 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:00.415) 0:00:03.726 ********** 2025-05-31 16:18:24.229524 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:18:24.229535 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:18:24.229545 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:18:24.229556 | orchestrator | 2025-05-31 16:18:24.229573 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:18:24.229592 | orchestrator | Saturday 31 May 2025 16:16:13 +0000 (0:00:00.630) 0:00:04.356 ********** 2025-05-31 16:18:24.229611 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-31 16:18:24.229631 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-31 16:18:24.229651 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-31 16:18:24.229670 | orchestrator | 2025-05-31 16:18:24.229682 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-31 16:18:24.229693 | orchestrator | 2025-05-31 16:18:24.229704 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-31 16:18:24.229715 | orchestrator | Saturday 31 May 2025 16:16:14 +0000 (0:00:00.452) 0:00:04.809 ********** 2025-05-31 16:18:24.229726 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:18:24.229738 | orchestrator | 2025-05-31 16:18:24.229751 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-31 16:18:24.229763 | orchestrator | Saturday 31 May 2025 16:16:16 +0000 (0:00:01.862) 0:00:06.671 ********** 2025-05-31 16:18:24.229775 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:18:24.229788 | orchestrator | 2025-05-31 16:18:24.229800 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-31 16:18:24.229812 | orchestrator | Saturday 31 May 2025 16:16:17 +0000 (0:00:01.276) 0:00:07.947 ********** 2025-05-31 16:18:24.229824 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.229837 | orchestrator | 2025-05-31 16:18:24.229849 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-05-31 16:18:24.229860 | orchestrator | Saturday 31 May 2025 16:16:18 +0000 (0:00:00.644) 0:00:08.592 ********** 2025-05-31 16:18:24.229872 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.229884 | orchestrator | 2025-05-31 16:18:24.229896 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-31 16:18:24.229908 | orchestrator | Saturday 31 May 2025 16:16:18 +0000 (0:00:00.448) 0:00:09.041 ********** 2025-05-31 16:18:24.229918 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.229929 | orchestrator | 2025-05-31 16:18:24.229954 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-31 16:18:24.229966 | orchestrator | Saturday 31 May 2025 16:16:18 +0000 (0:00:00.260) 0:00:09.301 ********** 2025-05-31 16:18:24.229985 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.229996 | orchestrator | 2025-05-31 16:18:24.230006 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-31 16:18:24.230076 | orchestrator | Saturday 31 May 2025 16:16:19 +0000 (0:00:00.317) 0:00:09.619 ********** 2025-05-31 16:18:24.230090 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:18:24.230101 | orchestrator | 2025-05-31 16:18:24.230111 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-31 16:18:24.230122 | orchestrator | Saturday 31 May 2025 16:16:20 +0000 (0:00:01.154) 0:00:10.774 ********** 2025-05-31 16:18:24.230132 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:18:24.230143 | orchestrator | 2025-05-31 16:18:24.230153 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-31 16:18:24.230164 | orchestrator | Saturday 31 May 2025 16:16:21 +0000 (0:00:00.794) 0:00:11.568 ********** 2025-05-31 16:18:24.230175 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.230185 | orchestrator | 2025-05-31 16:18:24.230195 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-31 16:18:24.230206 | orchestrator | Saturday 31 May 2025 16:16:21 +0000 (0:00:00.315) 0:00:11.884 ********** 2025-05-31 16:18:24.230217 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.230228 | orchestrator | 2025-05-31 16:18:24.230251 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-31 16:18:24.230262 | orchestrator | Saturday 31 May 2025 16:16:21 +0000 (0:00:00.331) 0:00:12.215 ********** 2025-05-31 16:18:24.230340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.230361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.230433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.230457 | orchestrator | 2025-05-31 16:18:24.230468 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-31 16:18:24.230479 | orchestrator | Saturday 31 May 2025 16:16:22 +0000 (0:00:00.963) 0:00:13.178 ********** 2025-05-31 16:18:24.230502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.230515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.230528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.230546 | orchestrator | 2025-05-31 16:18:24.230557 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-31 16:18:24.230568 | orchestrator | Saturday 31 May 2025 16:16:24 +0000 (0:00:01.450) 0:00:14.629 ********** 2025-05-31 16:18:24.230578 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-31 16:18:24.230615 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-31 16:18:24.230635 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-31 16:18:24.230655 | orchestrator | 2025-05-31 16:18:24.230675 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-31 16:18:24.230693 | orchestrator | Saturday 31 May 2025 16:16:25 +0000 (0:00:01.766) 0:00:16.395 ********** 
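The item value echoed by the preceding rabbitmq tasks is the kolla-ansible service definition for the rabbitmq container. Restated once as YAML purely for readability (values are taken verbatim from the task output above; the bootstrap_environment block, which additionally sets KOLLA_BOOTSTRAP, is omitted and the generated cluster cookie is shortened), it has roughly this shape:

    rabbitmq:
      container_name: rabbitmq
      group: rabbitmq
      enabled: true
      image: registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206
      environment:
        KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        RABBITMQ_CLUSTER_COOKIE: "zdd6geSB..."        # generated per-testbed secret, shortened here
        RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
      volumes:
        - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - rabbitmq:/var/lib/rabbitmq/
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_rabbitmq"]
        timeout: "30"
      haproxy:
        rabbitmq_management:
          enabled: "yes"
          mode: http
          port: "15672"
          host_group: rabbitmq

The haproxy entry is what exposes the RabbitMQ management interface on port 15672.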
2025-05-31 16:18:24.230704 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-31 16:18:24.230715 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-31 16:18:24.230725 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-31 16:18:24.230736 | orchestrator | 2025-05-31 16:18:24.230746 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-31 16:18:24.230815 | orchestrator | Saturday 31 May 2025 16:16:28 +0000 (0:00:02.400) 0:00:18.796 ********** 2025-05-31 16:18:24.230827 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-31 16:18:24.230837 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-31 16:18:24.230847 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-31 16:18:24.230858 | orchestrator | 2025-05-31 16:18:24.230877 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-31 16:18:24.230888 | orchestrator | Saturday 31 May 2025 16:16:30 +0000 (0:00:02.222) 0:00:21.018 ********** 2025-05-31 16:18:24.230899 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-31 16:18:24.230910 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-31 16:18:24.230921 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-31 16:18:24.230932 | orchestrator | 2025-05-31 16:18:24.230942 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-31 16:18:24.230953 | orchestrator | Saturday 31 May 2025 16:16:32 +0000 (0:00:01.575) 0:00:22.594 ********** 2025-05-31 16:18:24.230963 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-31 16:18:24.230974 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-31 16:18:24.230985 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-31 16:18:24.230995 | orchestrator | 2025-05-31 16:18:24.231006 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-31 16:18:24.231016 | orchestrator | Saturday 31 May 2025 16:16:33 +0000 (0:00:01.507) 0:00:24.102 ********** 2025-05-31 16:18:24.231027 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-31 16:18:24.231038 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-31 16:18:24.231075 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-31 16:18:24.231087 | orchestrator | 2025-05-31 16:18:24.231098 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-31 16:18:24.231108 | orchestrator | Saturday 31 May 2025 16:16:35 +0000 (0:00:01.337) 0:00:25.439 ********** 2025-05-31 16:18:24.231119 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.231129 | orchestrator | skipping: [testbed-node-1] 
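The ignored failure at the top of this play ("Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672") is the message ansible.builtin.wait_for emits when a search_regex probe times out; judging by the surrounding task names, it is only used to detect whether RabbitMQ is already running so that the next task can switch kolla_action_rabbitmq to upgrade. The task itself is not included in this log; a minimal sketch of such a probe, with illustrative parameters, would be:

    - name: Check RabbitMQ service              # task name as listed in the recap; parameters are illustrative
      ansible.builtin.wait_for:
        host: 192.168.16.9                      # endpoint taken from the ignored result above
        port: 15672                             # RabbitMQ management port (see the haproxy entry in the service definition)
        search_regex: "RabbitMQ Management"
        timeout: 2                              # a timeout here simply means RabbitMQ is not running yet
      register: rabbitmq_management_check       # illustrative variable name
      ignore_errors: true                       # matches the "...ignoring" result in the log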
2025-05-31 16:18:24.231140 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:18:24.231150 | orchestrator | 2025-05-31 16:18:24.231161 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-31 16:18:24.231172 | orchestrator | Saturday 31 May 2025 16:16:35 +0000 (0:00:00.476) 0:00:25.915 ********** 2025-05-31 16:18:24.231183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.231202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.231225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:18:24.231252 | orchestrator | 2025-05-31 16:18:24.231263 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-31 16:18:24.231295 | orchestrator | Saturday 31 May 2025 16:16:37 +0000 (0:00:01.600) 0:00:27.516 ********** 2025-05-31 16:18:24.231307 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:18:24.231317 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:18:24.231328 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:18:24.231339 | orchestrator | 2025-05-31 16:18:24.231350 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-31 16:18:24.231360 | orchestrator | Saturday 31 May 2025 16:16:38 +0000 (0:00:01.182) 0:00:28.698 ********** 2025-05-31 16:18:24.231371 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:18:24.231382 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:18:24.231393 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:18:24.231403 | orchestrator | 2025-05-31 16:18:24.231414 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-31 16:18:24.231425 | orchestrator | Saturday 31 May 2025 16:16:44 +0000 (0:00:06.479) 0:00:35.177 ********** 2025-05-31 16:18:24.231436 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:18:24.231447 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:18:24.231457 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:18:24.231468 | orchestrator | 2025-05-31 16:18:24.231479 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-31 16:18:24.231489 | orchestrator | 2025-05-31 16:18:24.231500 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-31 16:18:24.231511 | orchestrator | Saturday 31 May 2025 16:16:45 +0000 (0:00:00.360) 0:00:35.538 ********** 2025-05-31 16:18:24.231522 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:18:24.231532 | orchestrator | 2025-05-31 16:18:24.231543 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-31 16:18:24.231554 | orchestrator | Saturday 31 May 2025 16:16:45 +0000 (0:00:00.677) 0:00:36.215 ********** 2025-05-31 16:18:24.231564 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:18:24.231575 | orchestrator | 2025-05-31 16:18:24.231586 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-31 16:18:24.231596 | orchestrator | Saturday 31 May 2025 16:16:46 +0000 (0:00:00.604) 0:00:36.819 ********** 2025-05-31 16:18:24.231607 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:18:24.231618 | orchestrator | 2025-05-31 16:18:24.231628 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-31 16:18:24.231639 | orchestrator | Saturday 31 May 2025 16:16:48 +0000 (0:00:01.909) 0:00:38.729 ********** 2025-05-31 16:18:24.231649 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:18:24.231660 | orchestrator | 2025-05-31 16:18:24.231671 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-31 16:18:24.231681 | orchestrator | 2025-05-31 16:18:24.231692 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-31 
16:18:24.231703 | orchestrator | Saturday 31 May 2025 16:17:43 +0000 (0:00:55.441) 0:01:34.171 ********** 2025-05-31 16:18:24.231713 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:18:24.231724 | orchestrator | 2025-05-31 16:18:24.231735 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-31 16:18:24.231746 | orchestrator | Saturday 31 May 2025 16:17:44 +0000 (0:00:00.725) 0:01:34.896 ********** 2025-05-31 16:18:24.231756 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:18:24.231767 | orchestrator | 2025-05-31 16:18:24.231778 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-31 16:18:24.231789 | orchestrator | Saturday 31 May 2025 16:17:44 +0000 (0:00:00.198) 0:01:35.095 ********** 2025-05-31 16:18:24.231799 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:18:24.231810 | orchestrator | 2025-05-31 16:18:24.231821 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-31 16:18:24.231832 | orchestrator | Saturday 31 May 2025 16:17:51 +0000 (0:00:06.847) 0:01:41.942 ********** 2025-05-31 16:18:24.231849 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:18:24.231860 | orchestrator | 2025-05-31 16:18:24.231871 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-31 16:18:24.231881 | orchestrator | 2025-05-31 16:18:24.231892 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-31 16:18:24.231903 | orchestrator | Saturday 31 May 2025 16:18:02 +0000 (0:00:11.357) 0:01:53.300 ********** 2025-05-31 16:18:24.231913 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:18:24.231924 | orchestrator | 2025-05-31 16:18:24.231935 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-31 16:18:24.231946 | orchestrator | Saturday 31 May 2025 16:18:03 +0000 (0:00:00.772) 0:01:54.072 ********** 2025-05-31 16:18:24.231956 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:18:24.231967 | orchestrator | 2025-05-31 16:18:24.231977 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-31 16:18:24.231994 | orchestrator | Saturday 31 May 2025 16:18:03 +0000 (0:00:00.294) 0:01:54.367 ********** 2025-05-31 16:18:24.232005 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:18:24.232016 | orchestrator | 2025-05-31 16:18:24.232026 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-31 16:18:24.232037 | orchestrator | Saturday 31 May 2025 16:18:05 +0000 (0:00:01.979) 0:01:56.347 ********** 2025-05-31 16:18:24.232047 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:18:24.232058 | orchestrator | 2025-05-31 16:18:24.232068 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-31 16:18:24.232079 | orchestrator | 2025-05-31 16:18:24.232089 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-31 16:18:24.232100 | orchestrator | Saturday 31 May 2025 16:18:20 +0000 (0:00:14.611) 0:02:10.959 ********** 2025-05-31 16:18:24.232111 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:18:24.232121 | orchestrator | 2025-05-31 16:18:24.232132 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-05-31 16:18:24.234643 | orchestrator | Saturday 31 May 2025 16:18:21 +0000 (0:00:00.535) 0:02:11.494 ********** 2025-05-31 16:18:24.234695 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-31 16:18:24.234706 | orchestrator | enable_outward_rabbitmq_True 2025-05-31 16:18:24.234717 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-31 16:18:24.234728 | orchestrator | outward_rabbitmq_restart 2025-05-31 16:18:24.234739 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:18:24.234750 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:18:24.234761 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:18:24.234771 | orchestrator | 2025-05-31 16:18:24.234782 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-31 16:18:24.234792 | orchestrator | skipping: no hosts matched 2025-05-31 16:18:24.234803 | orchestrator | 2025-05-31 16:18:24.234813 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-31 16:18:24.234824 | orchestrator | skipping: no hosts matched 2025-05-31 16:18:24.234835 | orchestrator | 2025-05-31 16:18:24.234845 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-31 16:18:24.234855 | orchestrator | skipping: no hosts matched 2025-05-31 16:18:24.234866 | orchestrator | 2025-05-31 16:18:24.234877 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:18:24.234888 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-31 16:18:24.234900 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-31 16:18:24.234911 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:18:24.234922 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 16:18:24.234945 | orchestrator | 2025-05-31 16:18:24.234956 | orchestrator | 2025-05-31 16:18:24.234967 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:18:24.234978 | orchestrator | Saturday 31 May 2025 16:18:23 +0000 (0:00:02.407) 0:02:13.901 ********** 2025-05-31 16:18:24.234988 | orchestrator | =============================================================================== 2025-05-31 16:18:24.234999 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.41s 2025-05-31 16:18:24.235009 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.74s 2025-05-31 16:18:24.235020 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.48s 2025-05-31 16:18:24.235030 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.06s 2025-05-31 16:18:24.235041 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.41s 2025-05-31 16:18:24.235051 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.40s 2025-05-31 16:18:24.235062 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.22s 2025-05-31 16:18:24.235072 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 
2.17s 2025-05-31 16:18:24.235083 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.86s 2025-05-31 16:18:24.235094 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.77s 2025-05-31 16:18:24.235104 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.60s 2025-05-31 16:18:24.235115 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.58s 2025-05-31 16:18:24.235125 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.51s 2025-05-31 16:18:24.235136 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.45s 2025-05-31 16:18:24.235146 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.34s 2025-05-31 16:18:24.235157 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.28s 2025-05-31 16:18:24.235167 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.18s 2025-05-31 16:18:24.235178 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.15s 2025-05-31 16:18:24.235188 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.10s 2025-05-31 16:18:24.235199 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.96s 2025-05-31 16:18:24.235220 | orchestrator | 2025-05-31 16:18:24 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:24.235232 | orchestrator | 2025-05-31 16:18:24 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:24.235674 | orchestrator | 2025-05-31 16:18:24 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:24.235775 | orchestrator | 2025-05-31 16:18:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:27.285033 | orchestrator | 2025-05-31 16:18:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:27.285140 | orchestrator | 2025-05-31 16:18:27 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:27.286211 | orchestrator | 2025-05-31 16:18:27 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:27.287958 | orchestrator | 2025-05-31 16:18:27 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:27.287978 | orchestrator | 2025-05-31 16:18:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:30.353666 | orchestrator | 2025-05-31 16:18:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:30.356798 | orchestrator | 2025-05-31 16:18:30 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:30.359517 | orchestrator | 2025-05-31 16:18:30 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:30.361383 | orchestrator | 2025-05-31 16:18:30 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:30.361405 | orchestrator | 2025-05-31 16:18:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:33.417501 | orchestrator | 2025-05-31 16:18:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:33.419238 | orchestrator | 2025-05-31 16:18:33 | INFO  | Task 
d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:33.420862 | orchestrator | 2025-05-31 16:18:33 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:33.424533 | orchestrator | 2025-05-31 16:18:33 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:33.424556 | orchestrator | 2025-05-31 16:18:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:36.470725 | orchestrator | 2025-05-31 16:18:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:36.472108 | orchestrator | 2025-05-31 16:18:36 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:36.473616 | orchestrator | 2025-05-31 16:18:36 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:36.475105 | orchestrator | 2025-05-31 16:18:36 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:36.475373 | orchestrator | 2025-05-31 16:18:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:39.530108 | orchestrator | 2025-05-31 16:18:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:39.531211 | orchestrator | 2025-05-31 16:18:39 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:39.532142 | orchestrator | 2025-05-31 16:18:39 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:39.535752 | orchestrator | 2025-05-31 16:18:39 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:39.535864 | orchestrator | 2025-05-31 16:18:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:42.575119 | orchestrator | 2025-05-31 16:18:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:42.576803 | orchestrator | 2025-05-31 16:18:42 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:42.576834 | orchestrator | 2025-05-31 16:18:42 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:42.579940 | orchestrator | 2025-05-31 16:18:42 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:42.579968 | orchestrator | 2025-05-31 16:18:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:45.632287 | orchestrator | 2025-05-31 16:18:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:45.632400 | orchestrator | 2025-05-31 16:18:45 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:45.632411 | orchestrator | 2025-05-31 16:18:45 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:45.635516 | orchestrator | 2025-05-31 16:18:45 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:45.635558 | orchestrator | 2025-05-31 16:18:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:48.677621 | orchestrator | 2025-05-31 16:18:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:48.679558 | orchestrator | 2025-05-31 16:18:48 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:48.682731 | orchestrator | 2025-05-31 16:18:48 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:48.685101 | orchestrator | 2025-05-31 16:18:48 | INFO  | Task 
559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:48.685579 | orchestrator | 2025-05-31 16:18:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:51.725851 | orchestrator | 2025-05-31 16:18:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:51.726118 | orchestrator | 2025-05-31 16:18:51 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:51.726708 | orchestrator | 2025-05-31 16:18:51 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:51.727354 | orchestrator | 2025-05-31 16:18:51 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:51.727624 | orchestrator | 2025-05-31 16:18:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:54.761386 | orchestrator | 2025-05-31 16:18:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:54.761633 | orchestrator | 2025-05-31 16:18:54 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:54.762153 | orchestrator | 2025-05-31 16:18:54 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:54.765119 | orchestrator | 2025-05-31 16:18:54 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:54.765157 | orchestrator | 2025-05-31 16:18:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:18:57.811132 | orchestrator | 2025-05-31 16:18:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:18:57.811611 | orchestrator | 2025-05-31 16:18:57 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:18:57.812846 | orchestrator | 2025-05-31 16:18:57 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:18:57.813778 | orchestrator | 2025-05-31 16:18:57 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:18:57.814385 | orchestrator | 2025-05-31 16:18:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:00.860081 | orchestrator | 2025-05-31 16:19:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:00.860162 | orchestrator | 2025-05-31 16:19:00 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:00.860176 | orchestrator | 2025-05-31 16:19:00 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:00.860515 | orchestrator | 2025-05-31 16:19:00 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:00.860538 | orchestrator | 2025-05-31 16:19:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:03.890188 | orchestrator | 2025-05-31 16:19:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:03.890760 | orchestrator | 2025-05-31 16:19:03 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:03.890837 | orchestrator | 2025-05-31 16:19:03 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:03.892861 | orchestrator | 2025-05-31 16:19:03 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:03.892885 | orchestrator | 2025-05-31 16:19:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:06.947870 | orchestrator | 2025-05-31 16:19:06 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:06.948081 | orchestrator | 2025-05-31 16:19:06 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:06.948946 | orchestrator | 2025-05-31 16:19:06 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:06.951460 | orchestrator | 2025-05-31 16:19:06 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:06.951561 | orchestrator | 2025-05-31 16:19:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:09.998969 | orchestrator | 2025-05-31 16:19:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:09.999521 | orchestrator | 2025-05-31 16:19:09 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:10.000535 | orchestrator | 2025-05-31 16:19:09 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:10.003736 | orchestrator | 2025-05-31 16:19:09 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:10.003785 | orchestrator | 2025-05-31 16:19:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:13.060570 | orchestrator | 2025-05-31 16:19:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:13.061211 | orchestrator | 2025-05-31 16:19:13 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:13.061517 | orchestrator | 2025-05-31 16:19:13 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:13.065370 | orchestrator | 2025-05-31 16:19:13 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:13.065400 | orchestrator | 2025-05-31 16:19:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:16.108068 | orchestrator | 2025-05-31 16:19:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:16.108172 | orchestrator | 2025-05-31 16:19:16 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:16.108361 | orchestrator | 2025-05-31 16:19:16 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:16.109507 | orchestrator | 2025-05-31 16:19:16 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:16.109530 | orchestrator | 2025-05-31 16:19:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:19.148822 | orchestrator | 2025-05-31 16:19:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:19.150665 | orchestrator | 2025-05-31 16:19:19 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:19.152657 | orchestrator | 2025-05-31 16:19:19 | INFO  | Task c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state STARTED 2025-05-31 16:19:19.154533 | orchestrator | 2025-05-31 16:19:19 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:19.154559 | orchestrator | 2025-05-31 16:19:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:22.193739 | orchestrator | 2025-05-31 16:19:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:22.194332 | orchestrator | 2025-05-31 16:19:22 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:22.195419 | orchestrator | 2025-05-31 16:19:22 | INFO  | Task 
c0c8a70e-5a80-422a-bdf4-65c9200c6126 is in state SUCCESS 2025-05-31 16:19:22.197091 | orchestrator | 2025-05-31 16:19:22.197119 | orchestrator | 2025-05-31 16:19:22.197131 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:19:22.197142 | orchestrator | 2025-05-31 16:19:22.197153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:19:22.197164 | orchestrator | Saturday 31 May 2025 16:17:02 +0000 (0:00:00.145) 0:00:00.145 ********** 2025-05-31 16:19:22.197175 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.197186 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.197196 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.197207 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:19:22.197240 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:19:22.197251 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:19:22.197262 | orchestrator | 2025-05-31 16:19:22.197273 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:19:22.197324 | orchestrator | Saturday 31 May 2025 16:17:03 +0000 (0:00:00.460) 0:00:00.606 ********** 2025-05-31 16:19:22.197336 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-31 16:19:22.197347 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-31 16:19:22.197358 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-31 16:19:22.197369 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-31 16:19:22.197380 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-31 16:19:22.197391 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-31 16:19:22.197401 | orchestrator | 2025-05-31 16:19:22.197412 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-31 16:19:22.197423 | orchestrator | 2025-05-31 16:19:22.197434 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-31 16:19:22.197542 | orchestrator | Saturday 31 May 2025 16:17:04 +0000 (0:00:01.270) 0:00:01.876 ********** 2025-05-31 16:19:22.197569 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:19:22.197581 | orchestrator | 2025-05-31 16:19:22.197592 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-31 16:19:22.197602 | orchestrator | Saturday 31 May 2025 16:17:05 +0000 (0:00:00.962) 0:00:02.839 ********** 2025-05-31 16:19:22.197615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197712 | orchestrator | 2025-05-31 16:19:22.197725 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-31 16:19:22.197737 | orchestrator | Saturday 31 May 2025 16:17:06 +0000 (0:00:01.013) 0:00:03.852 ********** 2025-05-31 16:19:22.197750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197839 | orchestrator | 2025-05-31 16:19:22.197851 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-31 16:19:22.197892 | orchestrator | Saturday 31 May 2025 16:17:08 +0000 (0:00:01.747) 0:00:05.600 ********** 2025-05-31 16:19:22.197906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.197989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198002 | orchestrator | 2025-05-31 16:19:22.198058 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-31 16:19:22.198073 | orchestrator | Saturday 31 May 2025 16:17:09 +0000 (0:00:01.601) 0:00:07.201 ********** 2025-05-31 16:19:22.198084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198168 | orchestrator | 2025-05-31 16:19:22.198179 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-31 16:19:22.198190 | orchestrator | Saturday 31 May 2025 16:17:11 +0000 (0:00:01.538) 0:00:08.740 ********** 2025-05-31 16:19:22.198201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198329 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.198340 | orchestrator | 2025-05-31 16:19:22.198351 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-31 16:19:22.198362 | orchestrator | Saturday 31 May 2025 16:17:12 +0000 (0:00:01.524) 0:00:10.264 ********** 2025-05-31 16:19:22.198373 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.198384 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.198395 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.198405 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:19:22.198416 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:19:22.198426 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:19:22.198437 | orchestrator | 2025-05-31 16:19:22.198448 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-31 16:19:22.198458 | orchestrator | Saturday 31 May 2025 16:17:15 +0000 (0:00:02.752) 0:00:13.017 ********** 2025-05-31 16:19:22.198469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-31 16:19:22.198480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-31 16:19:22.198490 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-31 16:19:22.198506 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-31 16:19:22.198517 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-31 16:19:22.198528 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-31 16:19:22.198539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-31 16:19:22.198549 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-31 16:19:22.198560 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-31 16:19:22.198570 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-31 16:19:22.198581 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-31 16:19:22.198591 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-31 16:19:22.198602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-31 16:19:22.198619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-31 16:19:22.198630 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-31 16:19:22.198657 | 
orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-31 16:19:22.198670 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-31 16:19:22.198681 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-31 16:19:22.198692 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-31 16:19:22.198703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-31 16:19:22.198714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-31 16:19:22.198724 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-31 16:19:22.198735 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-31 16:19:22.198745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-31 16:19:22.198756 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-31 16:19:22.198766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-31 16:19:22.198777 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-31 16:19:22.198787 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-31 16:19:22.198798 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-31 16:19:22.198831 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-31 16:19:22.198843 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-31 16:19:22.198853 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-31 16:19:22.198864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-31 16:19:22.198875 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-31 16:19:22.198886 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-31 16:19:22.198896 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-31 16:19:22.198907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-31 16:19:22.198917 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-31 16:19:22.198928 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-31 16:19:22.198939 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-31 16:19:22.198956 | 
orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-31 16:19:22.198967 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-31 16:19:22.198986 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-31 16:19:22.198997 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-31 16:19:22.199008 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-31 16:19:22.199018 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-31 16:19:22.199029 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-31 16:19:22.199040 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-31 16:19:22.199050 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-31 16:19:22.199061 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-31 16:19:22.199077 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-31 16:19:22.199088 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-31 16:19:22.199099 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-31 16:19:22.199109 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-31 16:19:22.199120 | orchestrator | 2025-05-31 16:19:22.199131 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-31 16:19:22.199141 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:20.114) 0:00:33.131 ********** 2025-05-31 16:19:22.199152 | orchestrator | 2025-05-31 16:19:22.199162 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-31 16:19:22.199173 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.049) 0:00:33.181 ********** 2025-05-31 16:19:22.199184 | orchestrator | 2025-05-31 16:19:22.199194 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-31 16:19:22.199205 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.139) 0:00:33.321 ********** 2025-05-31 16:19:22.199244 | orchestrator | 2025-05-31 16:19:22.199256 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-31 16:19:22.199266 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.054) 0:00:33.375 ********** 2025-05-31 16:19:22.199277 | orchestrator | 2025-05-31 16:19:22.199288 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-05-31 16:19:22.199298 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.047) 0:00:33.422 ********** 2025-05-31 16:19:22.199309 | orchestrator | 2025-05-31 16:19:22.199320 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-31 16:19:22.199331 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.047) 0:00:33.470 ********** 2025-05-31 16:19:22.199341 | orchestrator | 2025-05-31 16:19:22.199352 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-31 16:19:22.199363 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.048) 0:00:33.519 ********** 2025-05-31 16:19:22.199373 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:19:22.199384 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.199395 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:19:22.199405 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.199416 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:19:22.199426 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.199449 | orchestrator | 2025-05-31 16:19:22.199460 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-31 16:19:22.199471 | orchestrator | Saturday 31 May 2025 16:17:38 +0000 (0:00:02.052) 0:00:35.571 ********** 2025-05-31 16:19:22.199482 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.199493 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.199503 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:19:22.199514 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.199524 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:19:22.199535 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:19:22.199546 | orchestrator | 2025-05-31 16:19:22.199556 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-31 16:19:22.199567 | orchestrator | 2025-05-31 16:19:22.199578 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-31 16:19:22.199589 | orchestrator | Saturday 31 May 2025 16:17:58 +0000 (0:00:20.417) 0:00:55.988 ********** 2025-05-31 16:19:22.199599 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:19:22.199610 | orchestrator | 2025-05-31 16:19:22.199621 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-31 16:19:22.199631 | orchestrator | Saturday 31 May 2025 16:17:59 +0000 (0:00:00.502) 0:00:56.491 ********** 2025-05-31 16:19:22.199642 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:19:22.199653 | orchestrator | 2025-05-31 16:19:22.199670 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-31 16:19:22.199681 | orchestrator | Saturday 31 May 2025 16:17:59 +0000 (0:00:00.575) 0:00:57.067 ********** 2025-05-31 16:19:22.199692 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.199703 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.199713 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.199723 | orchestrator | 2025-05-31 16:19:22.199734 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-31 16:19:22.199745 | 
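[editor's note] The ovn-controller play above, in the "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks, comes down to creating the integration bridge and writing a handful of external_ids onto the local Open_vSwitch record; ovn-controller reads these to learn its encapsulation IP, the southbound DB endpoints, and its bridge/gateway mappings. Below is a minimal hand-run sketch of the same settings, with values copied from the testbed-node-0 items in the log; it illustrates what the role configures, not the role's actual module calls.

```bash
# Sketch only: equivalent ovs-vsctl calls for one chassis (values taken from
# the testbed-node-0 items logged above); the role drives these via Ansible.
ovs-vsctl --may-exist add-br br-int          # "Create br-int bridge on OpenvSwitch"

ovs-vsctl set open_vswitch . \
    external_ids:ovn-encap-ip=192.168.16.10 \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
    external_ids:ovn-remote-probe-interval=60000 \
    external_ids:ovn-openflow-probe-interval=60 \
    external_ids:ovn-monitor-all=false \
    external_ids:ovn-bridge-mappings=physnet1:br-ex \
    external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

# Inspect what actually landed in the local OVSDB:
ovs-vsctl --columns=external_ids list open_vswitch .
```

The present/absent pattern in the task items explains the per-node differences: testbed-node-0/1/2 keep ovn-bridge-mappings and enable-chassis-as-gw so they can act as gateway chassis, while testbed-node-3/4/5 drop those and instead get ovn-chassis-mac-mappings. The handler that follows restarts the ovn_controller container so the new settings take effect.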
orchestrator | Saturday 31 May 2025 16:18:00 +0000 (0:00:00.754) 0:00:57.822 ********** 2025-05-31 16:19:22.199755 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.199766 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.199776 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.199787 | orchestrator | 2025-05-31 16:19:22.199798 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-31 16:19:22.199809 | orchestrator | Saturday 31 May 2025 16:18:00 +0000 (0:00:00.243) 0:00:58.065 ********** 2025-05-31 16:19:22.199819 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.199830 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.199840 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.199851 | orchestrator | 2025-05-31 16:19:22.199861 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-31 16:19:22.199872 | orchestrator | Saturday 31 May 2025 16:18:01 +0000 (0:00:00.358) 0:00:58.424 ********** 2025-05-31 16:19:22.199883 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.199893 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.199904 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.199914 | orchestrator | 2025-05-31 16:19:22.199925 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-31 16:19:22.199935 | orchestrator | Saturday 31 May 2025 16:18:01 +0000 (0:00:00.326) 0:00:58.751 ********** 2025-05-31 16:19:22.199946 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.199956 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.199967 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.199977 | orchestrator | 2025-05-31 16:19:22.199993 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-31 16:19:22.200004 | orchestrator | Saturday 31 May 2025 16:18:01 +0000 (0:00:00.243) 0:00:58.994 ********** 2025-05-31 16:19:22.200014 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200025 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200041 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200052 | orchestrator | 2025-05-31 16:19:22.200063 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-31 16:19:22.200073 | orchestrator | Saturday 31 May 2025 16:18:02 +0000 (0:00:00.388) 0:00:59.382 ********** 2025-05-31 16:19:22.200084 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200094 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200105 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200115 | orchestrator | 2025-05-31 16:19:22.200126 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-31 16:19:22.200137 | orchestrator | Saturday 31 May 2025 16:18:02 +0000 (0:00:00.565) 0:00:59.948 ********** 2025-05-31 16:19:22.200148 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200158 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200169 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200179 | orchestrator | 2025-05-31 16:19:22.200190 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-31 16:19:22.200200 | orchestrator | Saturday 31 May 2025 16:18:03 +0000 (0:00:00.747) 0:01:00.695 ********** 2025-05-31 16:19:22.200211 | orchestrator | 
skipping: [testbed-node-0] 2025-05-31 16:19:22.200260 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200272 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200283 | orchestrator | 2025-05-31 16:19:22.200294 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-31 16:19:22.200305 | orchestrator | Saturday 31 May 2025 16:18:03 +0000 (0:00:00.594) 0:01:01.290 ********** 2025-05-31 16:19:22.200315 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200326 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200337 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200347 | orchestrator | 2025-05-31 16:19:22.200358 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-31 16:19:22.200368 | orchestrator | Saturday 31 May 2025 16:18:04 +0000 (0:00:00.666) 0:01:01.957 ********** 2025-05-31 16:19:22.200379 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200390 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200400 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200411 | orchestrator | 2025-05-31 16:19:22.200422 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-31 16:19:22.200432 | orchestrator | Saturday 31 May 2025 16:18:05 +0000 (0:00:00.407) 0:01:02.365 ********** 2025-05-31 16:19:22.200443 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200454 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200464 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200475 | orchestrator | 2025-05-31 16:19:22.200486 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-31 16:19:22.200496 | orchestrator | Saturday 31 May 2025 16:18:05 +0000 (0:00:00.378) 0:01:02.743 ********** 2025-05-31 16:19:22.200507 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200517 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200528 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200538 | orchestrator | 2025-05-31 16:19:22.200549 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-31 16:19:22.200560 | orchestrator | Saturday 31 May 2025 16:18:05 +0000 (0:00:00.240) 0:01:02.983 ********** 2025-05-31 16:19:22.200570 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200581 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200591 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200602 | orchestrator | 2025-05-31 16:19:22.200613 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-31 16:19:22.200624 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:00.344) 0:01:03.328 ********** 2025-05-31 16:19:22.200634 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200645 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200663 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200673 | orchestrator | 2025-05-31 16:19:22.200690 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-31 16:19:22.200702 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:00.348) 0:01:03.676 ********** 2025-05-31 16:19:22.200712 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200723 | 
orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200733 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200744 | orchestrator | 2025-05-31 16:19:22.200755 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-31 16:19:22.200765 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:00.225) 0:01:03.902 ********** 2025-05-31 16:19:22.200776 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.200786 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.200797 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.200807 | orchestrator | 2025-05-31 16:19:22.200818 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-31 16:19:22.200828 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:00.302) 0:01:04.204 ********** 2025-05-31 16:19:22.200839 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:19:22.200849 | orchestrator | 2025-05-31 16:19:22.200860 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-31 16:19:22.200870 | orchestrator | Saturday 31 May 2025 16:18:07 +0000 (0:00:00.618) 0:01:04.823 ********** 2025-05-31 16:19:22.200881 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.200891 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.200902 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.200912 | orchestrator | 2025-05-31 16:19:22.200923 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-31 16:19:22.200934 | orchestrator | Saturday 31 May 2025 16:18:07 +0000 (0:00:00.343) 0:01:05.167 ********** 2025-05-31 16:19:22.200944 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.200955 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.200965 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.200975 | orchestrator | 2025-05-31 16:19:22.200991 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-31 16:19:22.201002 | orchestrator | Saturday 31 May 2025 16:18:08 +0000 (0:00:00.439) 0:01:05.606 ********** 2025-05-31 16:19:22.201012 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.201023 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.201033 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.201044 | orchestrator | 2025-05-31 16:19:22.201055 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-31 16:19:22.201065 | orchestrator | Saturday 31 May 2025 16:18:08 +0000 (0:00:00.351) 0:01:05.957 ********** 2025-05-31 16:19:22.201076 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.201086 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.201097 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.201107 | orchestrator | 2025-05-31 16:19:22.201118 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-31 16:19:22.201129 | orchestrator | Saturday 31 May 2025 16:18:08 +0000 (0:00:00.355) 0:01:06.313 ********** 2025-05-31 16:19:22.201139 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.201150 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.201160 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.201171 | 
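[editor's note] On this fresh testbed the lookup_cluster/bootstrap-initial checks above all resolve to skipped (no existing NB/SB volumes, no running cluster), so a new three-node clustered OVSDB is bootstrapped on testbed-node-0/1/2. The sketch below shows how the same checks can be repeated by hand once the ovn_nb_db/ovn_sb_db containers are up; it assumes Docker as the container engine and the stock OVN control-socket paths, neither of which is stated in this log.

```bash
# Sketch only: manual versions of the cluster checks performed above.
# Assumes Docker as the container engine and the default OVN ctl-socket paths
# inside the ovn_nb_db / ovn_sb_db containers; adjust both if they differ.

# Raft status (role, cluster ID, peers) of the Northbound and Southbound DBs:
docker exec ovn_nb_db ovn-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
docker exec ovn_sb_db ovn-appctl -t /run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

# Port liveness, as in the "Check OVN NB/SB service port liveness" tasks
# (6641 is the NB default, 6642 is the SB port used in ovn-remote above):
nc -z 192.168.16.10 6641 && echo NB up
nc -z 192.168.16.10 6642 && echo SB up
```

The role reported by cluster/status is what the later "Get OVN_Northbound/Southbound cluster leader" tasks key on: only the leader (testbed-node-0 in this run) has the "Configure OVN NB/SB connection settings" tasks applied, which is why those show changed on node-0 and skipping on node-1/2 further down.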
orchestrator | 2025-05-31 16:19:22.201181 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-31 16:19:22.201192 | orchestrator | Saturday 31 May 2025 16:18:09 +0000 (0:00:00.286) 0:01:06.599 ********** 2025-05-31 16:19:22.201203 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.201227 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.201239 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.201249 | orchestrator | 2025-05-31 16:19:22.201281 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-31 16:19:22.201293 | orchestrator | Saturday 31 May 2025 16:18:09 +0000 (0:00:00.344) 0:01:06.944 ********** 2025-05-31 16:19:22.201304 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.201315 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.201325 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.201336 | orchestrator | 2025-05-31 16:19:22.201347 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-31 16:19:22.201357 | orchestrator | Saturday 31 May 2025 16:18:09 +0000 (0:00:00.369) 0:01:07.314 ********** 2025-05-31 16:19:22.201368 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.201379 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.201389 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.201400 | orchestrator | 2025-05-31 16:19:22.201410 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-31 16:19:22.201421 | orchestrator | Saturday 31 May 2025 16:18:10 +0000 (0:00:00.368) 0:01:07.682 ********** 2025-05-31 16:19:22.201432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201489 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201556 | orchestrator | 2025-05-31 16:19:22.201567 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-31 16:19:22.201578 | orchestrator | Saturday 31 May 2025 16:18:11 +0000 (0:00:01.429) 0:01:09.112 ********** 2025-05-31 16:19:22.201589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201702 | orchestrator | 2025-05-31 16:19:22.201779 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-31 16:19:22.201799 | orchestrator | Saturday 31 May 2025 16:18:16 +0000 (0:00:04.254) 0:01:13.367 ********** 2025-05-31 16:19:22.201810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.201930 | orchestrator | 2025-05-31 16:19:22.201941 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-31 16:19:22.201952 | orchestrator | Saturday 31 May 2025 16:18:18 +0000 (0:00:02.540) 0:01:15.907 ********** 2025-05-31 16:19:22.201963 | orchestrator | 2025-05-31 16:19:22.201973 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-31 16:19:22.201984 | orchestrator | Saturday 31 May 2025 16:18:18 +0000 (0:00:00.055) 0:01:15.963 ********** 2025-05-31 16:19:22.201995 | orchestrator | 2025-05-31 16:19:22.202006 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-31 16:19:22.202070 | orchestrator | Saturday 31 May 2025 16:18:18 +0000 (0:00:00.054) 0:01:16.018 ********** 2025-05-31 16:19:22.202084 | orchestrator | 2025-05-31 16:19:22.202095 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-31 16:19:22.202106 | orchestrator | Saturday 31 May 2025 16:18:18 +0000 (0:00:00.053) 0:01:16.071 ********** 2025-05-31 16:19:22.202117 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.202128 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.202138 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.202149 | orchestrator | 2025-05-31 16:19:22.202160 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-31 16:19:22.202170 | orchestrator | Saturday 31 May 2025 16:18:26 +0000 (0:00:07.813) 0:01:23.885 ********** 2025-05-31 16:19:22.202181 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.202192 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.202202 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.202231 | orchestrator | 2025-05-31 16:19:22.202243 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-31 16:19:22.202253 | orchestrator | Saturday 31 May 2025 16:18:34 +0000 (0:00:07.802) 0:01:31.687 ********** 2025-05-31 16:19:22.202264 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.202275 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.202286 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.202297 | orchestrator | 2025-05-31 16:19:22.202307 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-31 16:19:22.202318 | orchestrator | Saturday 31 May 2025 16:18:42 +0000 (0:00:07.659) 0:01:39.346 ********** 2025-05-31 16:19:22.202329 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.202340 | orchestrator | 2025-05-31 16:19:22.202351 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-31 16:19:22.202362 | orchestrator | Saturday 31 May 2025 16:18:42 +0000 (0:00:00.111) 0:01:39.457 ********** 2025-05-31 16:19:22.202373 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.202383 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.202394 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.202405 | orchestrator | 2025-05-31 16:19:22.202423 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-31 
16:19:22.202441 | orchestrator | Saturday 31 May 2025 16:18:43 +0000 (0:00:01.141) 0:01:40.599 ********** 2025-05-31 16:19:22.202452 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.202463 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.202473 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.202484 | orchestrator | 2025-05-31 16:19:22.202495 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-31 16:19:22.202506 | orchestrator | Saturday 31 May 2025 16:18:43 +0000 (0:00:00.611) 0:01:41.210 ********** 2025-05-31 16:19:22.202517 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.202527 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.202538 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.202549 | orchestrator | 2025-05-31 16:19:22.202559 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-31 16:19:22.202570 | orchestrator | Saturday 31 May 2025 16:18:44 +0000 (0:00:01.031) 0:01:42.242 ********** 2025-05-31 16:19:22.202581 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.202592 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.202603 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.202613 | orchestrator | 2025-05-31 16:19:22.202624 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-31 16:19:22.202635 | orchestrator | Saturday 31 May 2025 16:18:45 +0000 (0:00:00.604) 0:01:42.847 ********** 2025-05-31 16:19:22.202645 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.202656 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.202667 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.202677 | orchestrator | 2025-05-31 16:19:22.202688 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-31 16:19:22.202699 | orchestrator | Saturday 31 May 2025 16:18:46 +0000 (0:00:01.147) 0:01:43.995 ********** 2025-05-31 16:19:22.202710 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.202721 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.202732 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.202742 | orchestrator | 2025-05-31 16:19:22.202753 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-31 16:19:22.202769 | orchestrator | Saturday 31 May 2025 16:18:47 +0000 (0:00:00.753) 0:01:44.748 ********** 2025-05-31 16:19:22.202779 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.202790 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.202801 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.202811 | orchestrator | 2025-05-31 16:19:22.202822 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-31 16:19:22.202833 | orchestrator | Saturday 31 May 2025 16:18:47 +0000 (0:00:00.536) 0:01:45.285 ********** 2025-05-31 16:19:22.202844 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202856 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202896 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202907 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202924 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202935 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202947 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.202958 | orchestrator | 2025-05-31 16:19:22.202969 | orchestrator | TASK [ovn-db : Copying over config.json 
files for services] ******************** 2025-05-31 16:19:22.202984 | orchestrator | Saturday 31 May 2025 16:18:49 +0000 (0:00:01.685) 0:01:46.971 ********** 2025-05-31 16:19:22.202996 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203007 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203018 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203030 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203076 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203110 | orchestrator | 2025-05-31 16:19:22.203121 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-31 16:19:22.203132 | orchestrator | Saturday 31 May 2025 16:18:54 +0000 (0:00:04.619) 0:01:51.591 ********** 2025-05-31 16:19:22.203147 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203159 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203170 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203210 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203268 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 
'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203287 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203299 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:19:22.203310 | orchestrator | 2025-05-31 16:19:22.203321 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-31 16:19:22.203332 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:02.774) 0:01:54.365 ********** 2025-05-31 16:19:22.203343 | orchestrator | 2025-05-31 16:19:22.203353 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-31 16:19:22.203364 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:00.051) 0:01:54.416 ********** 2025-05-31 16:19:22.203375 | orchestrator | 2025-05-31 16:19:22.203386 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-31 16:19:22.203397 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:00.130) 0:01:54.547 ********** 2025-05-31 16:19:22.203407 | orchestrator | 2025-05-31 16:19:22.203418 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-31 16:19:22.203429 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:00.048) 0:01:54.595 ********** 2025-05-31 16:19:22.203439 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.203455 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.203466 | orchestrator | 2025-05-31 16:19:22.203477 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-31 16:19:22.203487 | orchestrator | Saturday 31 May 2025 16:19:03 +0000 (0:00:06.122) 0:02:00.718 ********** 2025-05-31 16:19:22.203504 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.203515 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.203526 | orchestrator | 2025-05-31 16:19:22.203537 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-31 16:19:22.203547 | orchestrator | Saturday 31 May 2025 16:19:09 +0000 (0:00:06.257) 0:02:06.976 ********** 2025-05-31 16:19:22.203558 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:19:22.203569 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:19:22.203580 | orchestrator | 2025-05-31 16:19:22.203590 | orchestrator | TASK [ovn-db : Wait for leader election] 
*************************************** 2025-05-31 16:19:22.203601 | orchestrator | Saturday 31 May 2025 16:19:15 +0000 (0:00:06.281) 0:02:13.258 ********** 2025-05-31 16:19:22.203612 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:19:22.203622 | orchestrator | 2025-05-31 16:19:22.203633 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-31 16:19:22.203643 | orchestrator | Saturday 31 May 2025 16:19:16 +0000 (0:00:00.288) 0:02:13.547 ********** 2025-05-31 16:19:22.203654 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.203664 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.203673 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.203683 | orchestrator | 2025-05-31 16:19:22.203692 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-31 16:19:22.203702 | orchestrator | Saturday 31 May 2025 16:19:17 +0000 (0:00:00.844) 0:02:14.392 ********** 2025-05-31 16:19:22.203711 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.203721 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.203730 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.203740 | orchestrator | 2025-05-31 16:19:22.203750 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-31 16:19:22.203759 | orchestrator | Saturday 31 May 2025 16:19:17 +0000 (0:00:00.633) 0:02:15.025 ********** 2025-05-31 16:19:22.203769 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.203778 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.203787 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.203797 | orchestrator | 2025-05-31 16:19:22.203806 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-31 16:19:22.203816 | orchestrator | Saturday 31 May 2025 16:19:18 +0000 (0:00:01.015) 0:02:16.041 ********** 2025-05-31 16:19:22.203825 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:19:22.203835 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:19:22.203844 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:19:22.203854 | orchestrator | 2025-05-31 16:19:22.203863 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-31 16:19:22.203873 | orchestrator | Saturday 31 May 2025 16:19:19 +0000 (0:00:00.761) 0:02:16.802 ********** 2025-05-31 16:19:22.203883 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.203892 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.203901 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.203911 | orchestrator | 2025-05-31 16:19:22.203920 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-31 16:19:22.203930 | orchestrator | Saturday 31 May 2025 16:19:20 +0000 (0:00:00.813) 0:02:17.616 ********** 2025-05-31 16:19:22.203939 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:19:22.203949 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:19:22.203958 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:19:22.203967 | orchestrator | 2025-05-31 16:19:22.203977 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:19:22.203987 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-31 16:19:22.203997 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 
skipped=22  rescued=0 ignored=0 2025-05-31 16:19:22.204011 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-31 16:19:22.204027 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:19:22.204037 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:19:22.204046 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:19:22.204056 | orchestrator | 2025-05-31 16:19:22.204065 | orchestrator | 2025-05-31 16:19:22.204075 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:19:22.204085 | orchestrator | Saturday 31 May 2025 16:19:21 +0000 (0:00:00.967) 0:02:18.583 ********** 2025-05-31 16:19:22.204094 | orchestrator | =============================================================================== 2025-05-31 16:19:22.204104 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 20.42s 2025-05-31 16:19:22.204114 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.11s 2025-05-31 16:19:22.204123 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.06s 2025-05-31 16:19:22.204133 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.94s 2025-05-31 16:19:22.204142 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.94s 2025-05-31 16:19:22.204152 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.62s 2025-05-31 16:19:22.204165 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.25s 2025-05-31 16:19:22.204174 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.77s 2025-05-31 16:19:22.204184 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.75s 2025-05-31 16:19:22.204194 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.54s 2025-05-31 16:19:22.204203 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.05s 2025-05-31 16:19:22.204213 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.75s 2025-05-31 16:19:22.204239 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s 2025-05-31 16:19:22.204249 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.60s 2025-05-31 16:19:22.204259 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2025-05-31 16:19:22.204269 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.52s 2025-05-31 16:19:22.204278 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2025-05-31 16:19:22.204288 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s 2025-05-31 16:19:22.204297 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.15s 2025-05-31 16:19:22.204306 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.14s 2025-05-31 16:19:22.204316 | orchestrator | 2025-05-31 
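Editor's note: the recap above closes the ovn-db/ovn-controller play. In the task output, the NB and SB connection settings are reported "changed" only on testbed-node-0, directly after the cluster leader lookup, and every node then waits until the ovn-nb-db and ovn-sb-db services answer. Below is a minimal, hedged sketch of what such a wait task can look like; it is an illustration, not the literal task from the role. The ports 6641 (Northbound) and 6642 (Southbound) are the conventional OVN OVSDB ports, and api_interface_address is an assumed host variable for the node's API network address.

# Hedged sketch: wait until the OVN NB/SB OVSDB servers accept TCP connections.
# 6641/6642 are the conventional OVN NB/SB ports (assumption for this sketch);
# api_interface_address is an assumed host variable, not taken from the log.
- name: Wait for ovn-nb-db
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"
    port: 6641
    connect_timeout: 1
    timeout: 60

- name: Wait for ovn-sb-db
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"
    port: 6642
    connect_timeout: 1
    timeout: 60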
16:19:22 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:22.204326 | orchestrator | 2025-05-31 16:19:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:19:25.238258 | orchestrator | 2025-05-31 16:19:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:19:25.238576 | orchestrator | 2025-05-31 16:19:25 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:19:25.238996 | orchestrator | 2025-05-31 16:19:25 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:19:25.239101 | orchestrator | 2025-05-31 16:19:25 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated roughly every three seconds from 16:19:28 to 16:22:28: tasks f2bae605-c816-4e28-b7ae-14a464bce7fe, d840efa2-28d1-47f0-92c2-c28d489f1135 and 559a1d95-a743-4e2e-b27f-82fce44305e2 remain in state STARTED]
2025-05-31 16:22:31.400263 | orchestrator
| 2025-05-31 16:22:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:31.400867 | orchestrator | 2025-05-31 16:22:31 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:31.403608 | orchestrator | 2025-05-31 16:22:31 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state STARTED 2025-05-31 16:22:31.403648 | orchestrator | 2025-05-31 16:22:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:34.446258 | orchestrator | 2025-05-31 16:22:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:34.446351 | orchestrator | 2025-05-31 16:22:34 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:34.446366 | orchestrator | 2025-05-31 16:22:34 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:34.450377 | orchestrator | 2025-05-31 16:22:34 | INFO  | Task 559a1d95-a743-4e2e-b27f-82fce44305e2 is in state SUCCESS 2025-05-31 16:22:34.452930 | orchestrator | 2025-05-31 16:22:34.452980 | orchestrator | 2025-05-31 16:22:34.452993 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:22:34.453054 | orchestrator | 2025-05-31 16:22:34.453068 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:22:34.453080 | orchestrator | Saturday 31 May 2025 16:15:51 +0000 (0:00:00.300) 0:00:00.300 ********** 2025-05-31 16:22:34.453091 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.453103 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.453113 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.453124 | orchestrator | 2025-05-31 16:22:34.453135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:22:34.453146 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.392) 0:00:00.692 ********** 2025-05-31 16:22:34.453157 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-31 16:22:34.453168 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-31 16:22:34.453179 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-31 16:22:34.453190 | orchestrator | 2025-05-31 16:22:34.453200 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-31 16:22:34.453211 | orchestrator | 2025-05-31 16:22:34.453221 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-31 16:22:34.453253 | orchestrator | Saturday 31 May 2025 16:15:52 +0000 (0:00:00.417) 0:00:01.110 ********** 2025-05-31 16:22:34.453265 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.453282 | orchestrator | 2025-05-31 16:22:34.453303 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-31 16:22:34.453322 | orchestrator | Saturday 31 May 2025 16:15:53 +0000 (0:00:00.967) 0:00:02.077 ********** 2025-05-31 16:22:34.453342 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.453528 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.453549 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.453567 | orchestrator | 2025-05-31 16:22:34.453585 | orchestrator | TASK [Setting sysctl values] 
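Editor's note: the "Group hosts based on enabled services" task above places each host into a dynamic group such as enable_loadbalancer_True, which is how the following plays target only the hosts that actually run the loadbalancer. A minimal sketch of that pattern is shown below, assuming a boolean variable enable_loadbalancer; it illustrates the mechanism and is not the literal task from the playbook.

# Hedged sketch: build a dynamic group per enabled service.
# enable_loadbalancer is an assumed boolean variable for this illustration.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_loadbalancer_{{ enable_loadbalancer | bool }}"
  changed_when: false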
*************************************************** 2025-05-31 16:22:34.453603 | orchestrator | Saturday 31 May 2025 16:15:54 +0000 (0:00:00.851) 0:00:02.929 ********** 2025-05-31 16:22:34.453623 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.453643 | orchestrator | 2025-05-31 16:22:34.453663 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-31 16:22:34.453682 | orchestrator | Saturday 31 May 2025 16:15:55 +0000 (0:00:00.778) 0:00:03.707 ********** 2025-05-31 16:22:34.453694 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.453706 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.453721 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.453740 | orchestrator | 2025-05-31 16:22:34.453759 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-31 16:22:34.453778 | orchestrator | Saturday 31 May 2025 16:15:56 +0000 (0:00:01.126) 0:00:04.834 ********** 2025-05-31 16:22:34.453799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-31 16:22:34.453819 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-31 16:22:34.453840 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-31 16:22:34.453859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-31 16:22:34.453877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-31 16:22:34.453895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-31 16:22:34.453913 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-31 16:22:34.453946 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-31 16:22:34.453966 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-31 16:22:34.453980 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-31 16:22:34.453991 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-31 16:22:34.454001 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-31 16:22:34.454141 | orchestrator | 2025-05-31 16:22:34.454152 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-31 16:22:34.454343 | orchestrator | Saturday 31 May 2025 16:16:00 +0000 (0:00:04.230) 0:00:09.064 ********** 2025-05-31 16:22:34.454362 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-31 16:22:34.454378 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-31 16:22:34.454394 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-31 16:22:34.454412 | orchestrator | 2025-05-31 16:22:34.454430 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-31 16:22:34.454447 | orchestrator | Saturday 31 May 2025 16:16:01 +0000 (0:00:00.835) 0:00:09.900 ********** 2025-05-31 16:22:34.454465 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-31 
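Editor's note: the sysctl and module-load steps around this point prepare the nodes for the loadbalancer: net.ipv4.ip_nonlocal_bind and net.ipv6.ip_nonlocal_bind are set to 1 so HAProxy can bind the virtual IP even on nodes that do not currently hold it, net.unix.max_dgram_qlen is raised to 128, and the ip_vs module (used by keepalived) is loaded and persisted via modules-load.d. The sketch below shows equivalent tasks under the assumption that the ansible.posix and community.general collections are available; net.ipv4.tcp_retries2, which the run above leaves at its KOLLA_UNSET sentinel ("ok"), is simply omitted here.

# Hedged sketch of the sysctl and module persistence steps; values are taken
# from the log items above, module/file paths are illustrative assumptions.
- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
  loop:
    - { name: net.ipv6.ip_nonlocal_bind, value: 1 }
    - { name: net.ipv4.ip_nonlocal_bind, value: 1 }
    - { name: net.unix.max_dgram_qlen, value: 128 }

- name: Load modules
  community.general.modprobe:
    name: ip_vs
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    dest: /etc/modules-load.d/ip_vs.conf
    content: "ip_vs\n"
    mode: "0644"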
16:22:34.454482 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-31 16:22:34.454517 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-31 16:22:34.454537 | orchestrator | 2025-05-31 16:22:34.454558 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-31 16:22:34.454577 | orchestrator | Saturday 31 May 2025 16:16:02 +0000 (0:00:01.501) 0:00:11.401 ********** 2025-05-31 16:22:34.454596 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-31 16:22:34.454614 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.454665 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-31 16:22:34.454679 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.454690 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-31 16:22:34.454700 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.454711 | orchestrator | 2025-05-31 16:22:34.454722 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-31 16:22:34.454732 | orchestrator | Saturday 31 May 2025 16:16:03 +0000 (0:00:00.500) 0:00:11.902 ********** 2025-05-31 16:22:34.454783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.454803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.454815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.454827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.454924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.454986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.455168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.455198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.455221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.455242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.455273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.455308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.455328 | orchestrator | 2025-05-31 16:22:34.455348 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-31 16:22:34.455368 | orchestrator | Saturday 31 May 2025 16:16:05 +0000 (0:00:02.284) 0:00:14.187 ********** 2025-05-31 16:22:34.455386 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.455405 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.455425 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.455436 | orchestrator | 2025-05-31 16:22:34.455470 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-31 16:22:34.455482 | orchestrator | Saturday 31 May 2025 16:16:07 +0000 (0:00:01.684) 0:00:15.871 ********** 2025-05-31 16:22:34.455493 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-31 16:22:34.455504 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-31 16:22:34.455515 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-31 16:22:34.455526 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-31 16:22:34.455536 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-31 16:22:34.455547 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-31 16:22:34.455557 | orchestrator | 2025-05-31 16:22:34.455574 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] 
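Editor's note: the "Ensuring ... exist" tasks above iterate over the service dictionary printed in the items (haproxy, proxysql and keepalived enabled, haproxy-ssh disabled) and create one config directory per enabled service. A minimal sketch of that loop is given below; the services dictionary is assumed to have the shape shown in the log items, and node_config_directory is an assumed variable standing in for the /etc/kolla config root.

# Hedged sketch: create a config directory for every enabled service.
# "services" and "node_config_directory" are assumptions for this illustration.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "{{ node_config_directory }}/{{ item.key }}"
    state: directory
    owner: root
    group: root
    mode: "0770"
  when: item.value.enabled | bool
  loop: "{{ services | dict2items }}"

The healthcheck sub-dictionary carried by each enabled service (interval, retries, start_period and timeout of 30/3/5/30 plus a CMD-SHELL test such as healthcheck_curl http://192.168.16.10:61313 or healthcheck_listen proxysql 6032) is presumably rendered into the container's healthcheck when the containers are started later in the role.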
***************** 2025-05-31 16:22:34.455592 | orchestrator | Saturday 31 May 2025 16:16:09 +0000 (0:00:02.573) 0:00:18.445 ********** 2025-05-31 16:22:34.455639 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.455658 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.455673 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.455687 | orchestrator | 2025-05-31 16:22:34.455702 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-31 16:22:34.455716 | orchestrator | Saturday 31 May 2025 16:16:11 +0000 (0:00:01.980) 0:00:20.425 ********** 2025-05-31 16:22:34.455733 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.455749 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.455767 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.455825 | orchestrator | 2025-05-31 16:22:34.455879 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-31 16:22:34.455900 | orchestrator | Saturday 31 May 2025 16:16:14 +0000 (0:00:02.365) 0:00:22.791 ********** 2025-05-31 16:22:34.455920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.455937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.455965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.455975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.455997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.456032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.456046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.456057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.456074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.456089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456099 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.456110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456120 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.456137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456251 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.456269 | orchestrator | 2025-05-31 16:22:34.456282 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-31 16:22:34.456299 | orchestrator | Saturday 31 May 2025 16:16:17 +0000 (0:00:03.151) 0:00:25.942 ********** 2025-05-31 16:22:34.456315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.456428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.456444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.456459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456499 | orchestrator | 2025-05-31 16:22:34.456509 | orchestrator | TASK [loadbalancer : 
Copying over config.json files for services] ************** 2025-05-31 16:22:34.456518 | orchestrator | Saturday 31 May 2025 16:16:21 +0000 (0:00:03.973) 0:00:29.915 ********** 2025-05-31 16:22:34.456528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.456750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.456766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.456830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.456878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.456888 | orchestrator | 2025-05-31 16:22:34.456898 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-31 16:22:34.456907 | orchestrator | Saturday 31 May 2025 16:16:24 +0000 (0:00:03.121) 0:00:33.037 ********** 2025-05-31 16:22:34.456924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-31 16:22:34.456935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-31 16:22:34.456945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-31 16:22:34.456954 | orchestrator | 2025-05-31 16:22:34.456964 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-31 16:22:34.456973 | orchestrator | Saturday 31 May 2025 16:16:27 +0000 (0:00:02.992) 0:00:36.030 ********** 2025-05-31 16:22:34.456982 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-31 16:22:34.456997 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-31 16:22:34.457067 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-31 16:22:34.457079 | orchestrator | 2025-05-31 16:22:34.457089 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-31 16:22:34.457098 | orchestrator | Saturday 31 May 2025 16:16:31 +0000 (0:00:03.923) 0:00:39.954 ********** 2025-05-31 16:22:34.457108 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.457118 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.457157 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.457167 | orchestrator | 2025-05-31 16:22:34.457180 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-31 16:22:34.457195 | orchestrator | Saturday 31 May 2025 16:16:32 +0000 (0:00:00.749) 0:00:40.704 ********** 2025-05-31 16:22:34.457205 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-31 16:22:34.457234 | orchestrator | changed: 
[testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-31 16:22:34.457245 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-31 16:22:34.457254 | orchestrator | 2025-05-31 16:22:34.457264 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-31 16:22:34.457274 | orchestrator | Saturday 31 May 2025 16:16:34 +0000 (0:00:02.346) 0:00:43.050 ********** 2025-05-31 16:22:34.457284 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-31 16:22:34.457294 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-31 16:22:34.457303 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-31 16:22:34.457313 | orchestrator | 2025-05-31 16:22:34.457414 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-31 16:22:34.457427 | orchestrator | Saturday 31 May 2025 16:16:36 +0000 (0:00:01.987) 0:00:45.038 ********** 2025-05-31 16:22:34.457441 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-31 16:22:34.457456 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-31 16:22:34.457471 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-31 16:22:34.457482 | orchestrator | 2025-05-31 16:22:34.457492 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-31 16:22:34.457501 | orchestrator | Saturday 31 May 2025 16:16:38 +0000 (0:00:02.332) 0:00:47.370 ********** 2025-05-31 16:22:34.457511 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-31 16:22:34.457526 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-31 16:22:34.457535 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-31 16:22:34.457545 | orchestrator | 2025-05-31 16:22:34.457554 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-31 16:22:34.457563 | orchestrator | Saturday 31 May 2025 16:16:40 +0000 (0:00:02.104) 0:00:49.474 ********** 2025-05-31 16:22:34.457573 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.457582 | orchestrator | 2025-05-31 16:22:34.457591 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-31 16:22:34.457600 | orchestrator | Saturday 31 May 2025 16:16:41 +0000 (0:00:00.708) 0:00:50.183 ********** 2025-05-31 16:22:34.457608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}}) 2025-05-31 16:22:34.457630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.457639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.457647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.457656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.457667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.457675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.457694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.457703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.457711 | orchestrator | 2025-05-31 16:22:34.457719 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-31 16:22:34.457727 | orchestrator | Saturday 31 May 2025 16:16:45 +0000 (0:00:03.511) 0:00:53.694 ********** 2025-05-31 16:22:34.457735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.457743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.457751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.457759 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.457771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.457784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.457798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.457806 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.457814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.457823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.457831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.457839 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.457846 | orchestrator | 2025-05-31 16:22:34.457854 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-31 16:22:34.457862 | orchestrator | Saturday 31 May 2025 16:16:46 +0000 (0:00:01.105) 0:00:54.800 ********** 2025-05-31 16:22:34.457874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.457887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.457899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.457908 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.457955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.457965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.457973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.457981 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.457995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 16:22:34.458097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 16:22:34.458110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 16:22:34.458121 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.458134 | orchestrator | 2025-05-31 16:22:34.458145 | orchestrator | TASK 
[loadbalancer : Copying over haproxy start script] ************************ 2025-05-31 16:22:34.458156 | orchestrator | Saturday 31 May 2025 16:16:47 +0000 (0:00:01.150) 0:00:55.951 ********** 2025-05-31 16:22:34.458170 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-31 16:22:34.458178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-31 16:22:34.458186 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-31 16:22:34.458194 | orchestrator | 2025-05-31 16:22:34.458202 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-31 16:22:34.458209 | orchestrator | Saturday 31 May 2025 16:16:49 +0000 (0:00:01.806) 0:00:57.757 ********** 2025-05-31 16:22:34.458217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-31 16:22:34.458225 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-31 16:22:34.458233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-31 16:22:34.458241 | orchestrator | 2025-05-31 16:22:34.458248 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-31 16:22:34.458256 | orchestrator | Saturday 31 May 2025 16:16:51 +0000 (0:00:02.090) 0:00:59.847 ********** 2025-05-31 16:22:34.458263 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 16:22:34.458271 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 16:22:34.458279 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 16:22:34.458287 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 16:22:34.458294 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.458302 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 16:22:34.458309 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.458317 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 16:22:34.458330 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.458338 | orchestrator | 2025-05-31 16:22:34.458345 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-31 16:22:34.458353 | orchestrator | Saturday 31 May 2025 16:16:54 +0000 (0:00:03.267) 0:01:03.115 ********** 2025-05-31 16:22:34.458361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.458374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.458382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 16:22:34.458396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.458405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.458413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 16:22:34.458428 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.458466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.458477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.458491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.458499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 16:22:34.458507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245', '__omit_place_holder__d9ff3e45f1072147882bb3d3315c5953082d3245'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 16:22:34.458520 | orchestrator | 2025-05-31 16:22:34.458528 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-31 16:22:34.458536 | orchestrator | Saturday 31 May 2025 16:16:57 +0000 (0:00:02.721) 0:01:05.836 ********** 2025-05-31 16:22:34.458544 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.458552 | orchestrator | 2025-05-31 16:22:34.458560 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-31 16:22:34.458569 | orchestrator | Saturday 31 May 2025 16:16:57 +0000 (0:00:00.612) 0:01:06.449 ********** 2025-05-31 16:22:34.458584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-31 16:22:34.458599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.458612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-31 16:22:34.458656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.458664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-31 16:22:34.458798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.458813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458844 | orchestrator | 2025-05-31 16:22:34.458852 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-31 16:22:34.458864 | orchestrator | Saturday 31 May 2025 16:17:01 +0000 (0:00:03.643) 0:01:10.092 ********** 2025-05-31 16:22:34.458929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-31 16:22:34.458958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.458972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.458994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459030 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.459045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-31 16:22:34.459069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.459083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459116 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.459128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-31 16:22:34.459149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.459164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459242 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.459253 | orchestrator | 2025-05-31 16:22:34.459262 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-31 16:22:34.459270 | orchestrator | Saturday 31 May 2025 16:17:02 +0000 (0:00:00.675) 0:01:10.767 ********** 2025-05-31 16:22:34.459278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-31 16:22:34.459287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-31 16:22:34.459296 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.459304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-31 16:22:34.459312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-31 16:22:34.459320 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.459328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-31 16:22:34.459340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-31 16:22:34.459348 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.459356 | orchestrator | 2025-05-31 16:22:34.459364 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-31 16:22:34.459378 | orchestrator | Saturday 31 May 2025 16:17:03 +0000 (0:00:00.995) 0:01:11.763 ********** 2025-05-31 16:22:34.459388 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.459396 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.459455 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.459467 | orchestrator | 2025-05-31 16:22:34.459478 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-31 16:22:34.459486 | orchestrator | Saturday 31 May 2025 16:17:04 +0000 (0:00:01.478) 0:01:13.241 ********** 2025-05-31 16:22:34.459496 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.459509 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.459518 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.459525 | orchestrator | 2025-05-31 16:22:34.459533 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-31 16:22:34.459547 | orchestrator | Saturday 31 May 2025 16:17:06 +0000 (0:00:01.749) 
0:01:14.991 ********** 2025-05-31 16:22:34.459554 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.459562 | orchestrator | 2025-05-31 16:22:34.459570 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-31 16:22:34.459577 | orchestrator | Saturday 31 May 2025 16:17:07 +0000 (0:00:00.850) 0:01:15.841 ********** 2025-05-31 16:22:34.459595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.459605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.459635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.459678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459694 | orchestrator | 2025-05-31 16:22:34.459702 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-31 16:22:34.459710 | orchestrator | Saturday 31 May 2025 16:17:10 +0000 (0:00:03.712) 0:01:19.553 ********** 2025-05-31 16:22:34.459722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.459742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459758 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.459767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.459775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459825 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.459843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.459853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.459870 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.459877 | orchestrator | 2025-05-31 16:22:34.459885 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-31 16:22:34.459893 | orchestrator | Saturday 31 May 2025 16:17:11 +0000 (0:00:00.632) 0:01:20.186 ********** 2025-05-31 16:22:34.459901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 16:22:34.459910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 16:22:34.459919 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.459927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 16:22:34.459941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 16:22:34.459949 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.459963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 16:22:34.459971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 16:22:34.459979 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.459987 | orchestrator | 2025-05-31 16:22:34.459995 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-31 16:22:34.460058 | orchestrator | Saturday 31 May 2025 16:17:12 +0000 (0:00:01.049) 0:01:21.236 ********** 2025-05-31 
16:22:34.460069 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.460077 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.460085 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.460092 | orchestrator | 2025-05-31 16:22:34.460100 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-31 16:22:34.460108 | orchestrator | Saturday 31 May 2025 16:17:13 +0000 (0:00:01.158) 0:01:22.394 ********** 2025-05-31 16:22:34.460116 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.460123 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.460131 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.460139 | orchestrator | 2025-05-31 16:22:34.460147 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-31 16:22:34.460154 | orchestrator | Saturday 31 May 2025 16:17:15 +0000 (0:00:01.978) 0:01:24.372 ********** 2025-05-31 16:22:34.460162 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.460170 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.460288 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.460302 | orchestrator | 2025-05-31 16:22:34.460317 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-31 16:22:34.460326 | orchestrator | Saturday 31 May 2025 16:17:16 +0000 (0:00:00.536) 0:01:24.909 ********** 2025-05-31 16:22:34.460333 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.460341 | orchestrator | 2025-05-31 16:22:34.460349 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-31 16:22:34.460356 | orchestrator | Saturday 31 May 2025 16:17:17 +0000 (0:00:01.366) 0:01:26.275 ********** 2025-05-31 16:22:34.460365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-31 16:22:34.460374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-31 16:22:34.460392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-31 16:22:34.460399 | orchestrator | 2025-05-31 16:22:34.460405 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-31 16:22:34.460412 | orchestrator | Saturday 31 May 2025 16:17:20 +0000 (0:00:02.738) 0:01:29.014 ********** 2025-05-31 16:22:34.460419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-31 16:22:34.460426 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.460438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-31 16:22:34.460445 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.460452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-31 16:22:34.460463 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.460469 | orchestrator | 2025-05-31 16:22:34.460476 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-31 16:22:34.460482 | orchestrator | Saturday 31 May 2025 16:17:22 +0000 (0:00:01.660) 0:01:30.674 ********** 2025-05-31 16:22:34.460489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 16:22:34.460498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 16:22:34.460505 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.460516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 16:22:34.460523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 16:22:34.460530 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.460536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 16:22:34.460548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 16:22:34.460555 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.460562 | orchestrator | 2025-05-31 16:22:34.460568 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-31 16:22:34.460575 | orchestrator | Saturday 31 May 2025 16:17:24 +0000 (0:00:02.004) 0:01:32.678 ********** 2025-05-31 16:22:34.460581 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.460588 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.460595 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.460601 | orchestrator | 2025-05-31 16:22:34.460608 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-31 16:22:34.460619 | orchestrator | Saturday 31 May 2025 16:17:24 +0000 (0:00:00.743) 0:01:33.422 ********** 2025-05-31 16:22:34.460626 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.460632 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.460639 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.460645 | orchestrator | 2025-05-31 16:22:34.460652 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-31 16:22:34.460659 | orchestrator | Saturday 31 May 2025 16:17:25 +0000 (0:00:01.163) 0:01:34.585 ********** 2025-05-31 16:22:34.460665 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.460672 | orchestrator | 2025-05-31 16:22:34.460678 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-31 16:22:34.460685 | orchestrator | Saturday 31 May 2025 16:17:26 +0000 (0:00:00.900) 0:01:35.486 ********** 2025-05-31 16:22:34.460692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.460699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.460734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.460781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460802 | orchestrator | 2025-05-31 16:22:34.460808 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-31 16:22:34.460815 | orchestrator | Saturday 31 May 2025 16:17:30 +0000 (0:00:03.707) 0:01:39.194 ********** 2025-05-31 16:22:34.460825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.460832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.460855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460889 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.460900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460907 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.460914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.460970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.460989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461020 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.461028 | orchestrator | 2025-05-31 16:22:34.461034 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-31 16:22:34.461041 | orchestrator | Saturday 31 May 2025 16:17:31 +0000 (0:00:00.726) 0:01:39.920 ********** 2025-05-31 16:22:34.461049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-31 16:22:34.461067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-31 16:22:34.461078 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.461085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-31 16:22:34.461092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-31 16:22:34.461098 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.461105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-31 16:22:34.461113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-31 16:22:34.461124 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.461133 | orchestrator | 2025-05-31 16:22:34.461140 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-31 16:22:34.461146 | orchestrator | Saturday 31 May 2025 16:17:32 +0000 (0:00:00.778) 0:01:40.699 ********** 2025-05-31 16:22:34.461152 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.461159 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.461166 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.461172 | orchestrator | 2025-05-31 16:22:34.461179 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-31 16:22:34.461185 | orchestrator | Saturday 31 May 2025 16:17:33 +0000 (0:00:01.185) 0:01:41.885 ********** 2025-05-31 16:22:34.461191 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.461198 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.461204 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.461211 | orchestrator | 2025-05-31 16:22:34.461217 | orchestrator | TASK [include_role : 
cloudkitty] *********************************************** 2025-05-31 16:22:34.461224 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:01.799) 0:01:43.684 ********** 2025-05-31 16:22:34.461230 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.461237 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.461243 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.461250 | orchestrator | 2025-05-31 16:22:34.461256 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-31 16:22:34.461263 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.226) 0:01:43.911 ********** 2025-05-31 16:22:34.461269 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.461275 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.461282 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.461288 | orchestrator | 2025-05-31 16:22:34.461295 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-31 16:22:34.461302 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.295) 0:01:44.206 ********** 2025-05-31 16:22:34.461313 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.461319 | orchestrator | 2025-05-31 16:22:34.461326 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-31 16:22:34.461332 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.797) 0:01:45.004 ********** 2025-05-31 16:22:34.461343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:22:34.461826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:22:34.461847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:22:34.461919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:22:34.461927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.461941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:22:34.462178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:22:34.462185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462228 | orchestrator | 2025-05-31 16:22:34.462239 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-31 16:22:34.462246 | orchestrator | Saturday 31 May 2025 16:17:40 +0000 (0:00:04.528) 0:01:49.532 ********** 2025-05-31 16:22:34.462253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:22:34.462260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:22:34.462272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-05-31 16:22:34.462347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:22:34.462373 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.462384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:22:34.462391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:22:34.462405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462478 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.462486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.462507 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.462514 | orchestrator | 2025-05-31 16:22:34.462522 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-31 16:22:34.462529 | orchestrator | Saturday 31 May 2025 16:17:41 +0000 (0:00:00.787) 0:01:50.320 ********** 2025-05-31 16:22:34.462537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-31 16:22:34.462546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-31 16:22:34.462554 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.462561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-31 16:22:34.462569 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-31 16:22:34.462576 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.462586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-31 16:22:34.462594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-31 16:22:34.462649 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.462658 | orchestrator | 2025-05-31 16:22:34.462665 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-31 16:22:34.462673 | orchestrator | Saturday 31 May 2025 16:17:43 +0000 (0:00:01.337) 0:01:51.657 ********** 2025-05-31 16:22:34.462682 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.462694 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.462701 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.462710 | orchestrator | 2025-05-31 16:22:34.462741 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-31 16:22:34.462750 | orchestrator | Saturday 31 May 2025 16:17:44 +0000 (0:00:01.148) 0:01:52.806 ********** 2025-05-31 16:22:34.462757 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.462768 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.462777 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.462785 | orchestrator | 2025-05-31 16:22:34.462792 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-31 16:22:34.462800 | orchestrator | Saturday 31 May 2025 16:17:46 +0000 (0:00:01.844) 0:01:54.651 ********** 2025-05-31 16:22:34.462807 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.462815 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.462822 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.462829 | orchestrator | 2025-05-31 16:22:34.462836 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-31 16:22:34.462852 | orchestrator | Saturday 31 May 2025 16:17:46 +0000 (0:00:00.389) 0:01:55.040 ********** 2025-05-31 16:22:34.462859 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.462865 | orchestrator | 2025-05-31 16:22:34.462872 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-31 16:22:34.462878 | orchestrator | Saturday 31 May 2025 16:17:47 +0000 (0:00:00.894) 0:01:55.935 ********** 2025-05-31 16:22:34.462886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:22:34.462899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:22:34.462932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.462949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.462963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:22:34.462978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.462986 | orchestrator | 2025-05-31 16:22:34.462993 | 
orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-31 16:22:34.463000 | orchestrator | Saturday 31 May 2025 16:17:51 +0000 (0:00:04.122) 0:02:00.057 ********** 2025-05-31 16:22:34.463063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:22:34.463077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.463085 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.463103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:22:34.463116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.463124 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.463135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:22:34.463152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.463159 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.463166 | orchestrator | 2025-05-31 16:22:34.463173 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-31 16:22:34.463179 | orchestrator | Saturday 31 May 2025 16:17:54 +0000 (0:00:02.960) 0:02:03.018 ********** 2025-05-31 16:22:34.463186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 16:22:34.463197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 16:22:34.463205 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.463212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 16:22:34.463227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 16:22:34.463234 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.463241 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 16:22:34.463248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 16:22:34.463255 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.463261 | orchestrator | 2025-05-31 16:22:34.463268 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-31 16:22:34.463274 | orchestrator | Saturday 31 May 2025 16:17:57 +0000 (0:00:03.206) 0:02:06.225 ********** 2025-05-31 16:22:34.463281 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.463287 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.463294 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.463340 | orchestrator | 2025-05-31 16:22:34.463347 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-31 16:22:34.463353 | orchestrator | Saturday 31 May 2025 16:17:58 +0000 (0:00:01.087) 0:02:07.312 ********** 2025-05-31 16:22:34.463360 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.463366 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.463372 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.463378 | orchestrator | 2025-05-31 16:22:34.463384 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-31 16:22:34.463390 | orchestrator | Saturday 31 May 2025 16:18:00 +0000 (0:00:01.885) 0:02:09.197 ********** 2025-05-31 16:22:34.463396 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.463405 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.463416 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.463423 | orchestrator | 2025-05-31 16:22:34.463429 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-31 16:22:34.463440 | orchestrator | Saturday 31 May 2025 16:18:00 +0000 (0:00:00.350) 0:02:09.547 ********** 2025-05-31 16:22:34.463499 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.463505 | orchestrator | 2025-05-31 16:22:34.463512 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-31 16:22:34.463522 | orchestrator | Saturday 31 May 2025 16:18:01 +0000 (0:00:00.870) 0:02:10.418 ********** 2025-05-31 16:22:34.463538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:22:34.463548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:22:34.463563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:22:34.463570 | orchestrator | 2025-05-31 16:22:34.463576 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-31 16:22:34.463582 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:04.408) 0:02:14.826 ********** 2025-05-31 16:22:34.463588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:22:34.463595 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.463601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:22:34.463607 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.463614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:22:34.463624 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.463630 | orchestrator | 2025-05-31 16:22:34.463639 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-31 16:22:34.463645 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:00.468) 0:02:15.295 ********** 2025-05-31 16:22:34.463651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-31 16:22:34.463658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-31 16:22:34.463664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-31 16:22:34.463670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-31 16:22:34.463677 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.463683 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.463689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-31 16:22:34.463699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-31 16:22:34.463705 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.463711 | orchestrator | 2025-05-31 16:22:34.463718 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-31 16:22:34.463724 | orchestrator | Saturday 31 May 2025 16:18:07 +0000 (0:00:00.698) 0:02:15.993 ********** 2025-05-31 16:22:34.463730 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.463736 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.463742 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.463748 | orchestrator | 2025-05-31 16:22:34.463754 | orchestrator | TASK [proxysql-config : Copying 
over grafana ProxySQL rules config] ************ 2025-05-31 16:22:34.463760 | orchestrator | Saturday 31 May 2025 16:18:08 +0000 (0:00:01.093) 0:02:17.087 ********** 2025-05-31 16:22:34.463766 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.463772 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.463778 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.463784 | orchestrator | 2025-05-31 16:22:34.463790 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-31 16:22:34.463796 | orchestrator | Saturday 31 May 2025 16:18:10 +0000 (0:00:01.937) 0:02:19.024 ********** 2025-05-31 16:22:34.463802 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.463808 | orchestrator | 2025-05-31 16:22:34.463825 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-31 16:22:34.463831 | orchestrator | Saturday 31 May 2025 16:18:11 +0000 (0:00:01.230) 0:02:20.254 ********** 2025-05-31 16:22:34.463837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.463852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.463859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.465403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.465438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.465455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.465462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.465475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.465481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.465487 | orchestrator | 2025-05-31 16:22:34.465499 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-31 16:22:34.465505 | orchestrator | Saturday 31 May 2025 16:18:18 +0000 (0:00:06.443) 0:02:26.698 ********** 2025-05-31 16:22:34.465510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.465520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.465526 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.465531 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.465540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.465549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.465555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.465564 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.465570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.465576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.465584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.465590 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.465595 | orchestrator | 2025-05-31 16:22:34.465601 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-31 16:22:34.465606 | orchestrator | Saturday 31 May 2025 16:18:18 +0000 (0:00:00.738) 0:02:27.436 ********** 2025-05-31 16:22:34.465612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465649 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.465654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465726 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.465733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-31 16:22:34.465759 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.465767 | orchestrator | 2025-05-31 16:22:34.465777 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-31 16:22:34.465787 | orchestrator | Saturday 31 May 2025 16:18:20 +0000 (0:00:01.647) 0:02:29.084 ********** 2025-05-31 16:22:34.465793 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.465798 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.465803 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.465808 | orchestrator | 2025-05-31 16:22:34.465814 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-31 16:22:34.465819 | orchestrator | Saturday 31 May 2025 16:18:21 +0000 (0:00:01.324) 0:02:30.409 ********** 2025-05-31 16:22:34.465824 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.465830 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.465835 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.465840 | orchestrator | 2025-05-31 16:22:34.465845 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-31 16:22:34.465851 | orchestrator | Saturday 31 May 2025 16:18:24 +0000 (0:00:02.480) 0:02:32.889 ********** 2025-05-31 16:22:34.465859 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 
2025-05-31 16:22:34.465865 | orchestrator | 2025-05-31 16:22:34.465870 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-31 16:22:34.465875 | orchestrator | Saturday 31 May 2025 16:18:25 +0000 (0:00:01.024) 0:02:33.914 ********** 2025-05-31 16:22:34.465887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:22:34.465902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:22:34.465913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:22:34.465924 | orchestrator | 2025-05-31 16:22:34.465929 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 
2025-05-31 16:22:34.465934 | orchestrator | Saturday 31 May 2025 16:18:29 +0000 (0:00:04.040) 0:02:37.955 ********** 2025-05-31 16:22:34.465944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:22:34.465954 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.465964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:22:34.465970 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.465980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:22:34.465991 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.465997 | orchestrator | 2025-05-31 16:22:34.466064 | orchestrator | TASK [haproxy-config : Configuring 
firewall for horizon] *********************** 2025-05-31 16:22:34.466077 | orchestrator | Saturday 31 May 2025 16:18:30 +0000 (0:00:00.971) 0:02:38.927 ********** 2025-05-31 16:22:34.466086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 16:22:34.466094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 16:22:34.466102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 16:22:34.466109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 16:22:34.466115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-31 16:22:34.466122 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 16:22:34.466134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 16:22:34.466141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 16:22:34.466150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 16:22:34.466163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}})  2025-05-31 16:22:34.466169 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 16:22:34.466182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 16:22:34.466191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 16:22:34.466198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 16:22:34.466204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-31 16:22:34.466210 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466216 | orchestrator | 2025-05-31 16:22:34.466222 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-31 16:22:34.466228 | orchestrator | Saturday 31 May 2025 16:18:31 +0000 (0:00:01.169) 0:02:40.096 ********** 2025-05-31 16:22:34.466234 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.466240 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.466246 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.466252 | orchestrator | 2025-05-31 16:22:34.466257 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-31 16:22:34.466263 | orchestrator | Saturday 31 May 2025 16:18:32 +0000 (0:00:01.365) 0:02:41.462 ********** 2025-05-31 16:22:34.466268 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.466273 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.466278 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.466284 | orchestrator | 2025-05-31 16:22:34.466289 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-31 16:22:34.466294 | orchestrator | Saturday 31 May 2025 16:18:35 +0000 (0:00:02.217) 0:02:43.679 ********** 2025-05-31 16:22:34.466299 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466304 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466310 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466315 | orchestrator | 2025-05-31 16:22:34.466320 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-31 16:22:34.466326 | orchestrator | Saturday 
31 May 2025 16:18:35 +0000 (0:00:00.440) 0:02:44.120 ********** 2025-05-31 16:22:34.466331 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466336 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466341 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466350 | orchestrator | 2025-05-31 16:22:34.466356 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-31 16:22:34.466361 | orchestrator | Saturday 31 May 2025 16:18:35 +0000 (0:00:00.273) 0:02:44.394 ********** 2025-05-31 16:22:34.466366 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.466372 | orchestrator | 2025-05-31 16:22:34.466377 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-31 16:22:34.466382 | orchestrator | Saturday 31 May 2025 16:18:36 +0000 (0:00:01.170) 0:02:45.564 ********** 2025-05-31 16:22:34.466388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:22:34.466395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:22:34.466405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:22:34.466412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:22:34.466448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:22:34.466463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:22:34.466472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:22:34.466482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:22:34.466488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:22:34.466494 | orchestrator | 2025-05-31 16:22:34.466499 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-31 16:22:34.466505 | orchestrator | Saturday 31 May 2025 16:18:41 +0000 (0:00:04.135) 0:02:49.700 ********** 2025-05-31 16:22:34.466511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:22:34.466521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:22:34.466529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:22:34.466535 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:22:34.466550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:22:34.466556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:22:34.466568 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:22:34.466583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:22:34.466588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:22:34.466594 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466599 | orchestrator | 2025-05-31 16:22:34.466605 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-31 16:22:34.466610 | orchestrator | Saturday 31 May 2025 16:18:41 +0000 (0:00:00.752) 0:02:50.453 ********** 2025-05-31 16:22:34.466619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-31 16:22:34.466625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-31 16:22:34.466630 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-31 16:22:34.466646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-31 16:22:34.466651 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance "roundrobin"']}})  2025-05-31 16:22:34.466663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-31 16:22:34.466668 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466673 | orchestrator | 2025-05-31 16:22:34.466679 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-31 16:22:34.466684 | orchestrator | Saturday 31 May 2025 16:18:43 +0000 (0:00:01.380) 0:02:51.834 ********** 2025-05-31 16:22:34.466689 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.466695 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.466700 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.466705 | orchestrator | 2025-05-31 16:22:34.466711 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-31 16:22:34.466716 | orchestrator | Saturday 31 May 2025 16:18:44 +0000 (0:00:01.406) 0:02:53.240 ********** 2025-05-31 16:22:34.466721 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.466727 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.466732 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.466737 | orchestrator | 2025-05-31 16:22:34.466743 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-31 16:22:34.466748 | orchestrator | Saturday 31 May 2025 16:18:47 +0000 (0:00:02.387) 0:02:55.628 ********** 2025-05-31 16:22:34.466753 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466758 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466764 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466769 | orchestrator | 2025-05-31 16:22:34.466774 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-31 16:22:34.466780 | orchestrator | Saturday 31 May 2025 16:18:47 +0000 (0:00:00.298) 0:02:55.927 ********** 2025-05-31 16:22:34.466788 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.466793 | orchestrator | 2025-05-31 16:22:34.466799 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-31 16:22:34.466804 | orchestrator | Saturday 31 May 2025 16:18:48 +0000 (0:00:01.344) 0:02:57.272 ********** 2025-05-31 16:22:34.466810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
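The per-service dictionaries dumped by the haproxy-config loop above are kolla-ansible service definitions; the role turns every entry that carries a haproxy key into HAProxy frontend/backend configuration. Re-rendered as YAML for readability, the magnum-api item logged for testbed-node-0 corresponds roughly to the following abridged sketch (field names and values are taken from the loop item above; the enclosing variable name magnum_services is an assumption for illustration, and volumes, healthcheck and environment are omitted):

    magnum_services:                # assumed variable name, illustrative only
      magnum-api:
        container_name: magnum_api
        group: magnum-api
        enabled: true
        image: registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206
        haproxy:
          magnum_api:
            enabled: "yes"
            mode: http
            external: false
            port: "9511"
            listen_port: "9511"
          magnum_api_external:
            enabled: "yes"
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: "9511"
            listen_port: "9511"

Items without a haproxy section, such as the magnum-conductor entries below, are presumably skipped by this task for that reason, which is consistent with the "skipping" results in the log.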
2025-05-31 16:22:34.466819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.466829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:22:34.466835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.466843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:22:34.466849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.466859 | orchestrator | 2025-05-31 16:22:34.466864 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-31 16:22:34.466869 | orchestrator | Saturday 31 May 2025 16:18:53 +0000 (0:00:04.952) 0:03:02.225 ********** 2025-05-31 16:22:34.466878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:22:34.466885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.466890 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:22:34.466904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.466910 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.466925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:22:34.466936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.466946 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.466952 | orchestrator | 2025-05-31 16:22:34.466957 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-31 16:22:34.466962 | orchestrator | Saturday 31 May 2025 16:18:54 +0000 (0:00:00.683) 0:03:02.909 ********** 2025-05-31 16:22:34.466968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-31 16:22:34.466974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}})  2025-05-31 16:22:34.466979 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.466984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-31 16:22:34.466990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-31 16:22:34.466995 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-31 16:22:34.467052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-31 16:22:34.467058 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467064 | orchestrator | 2025-05-31 16:22:34.467069 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-31 16:22:34.467074 | orchestrator | Saturday 31 May 2025 16:18:55 +0000 (0:00:01.085) 0:03:03.994 ********** 2025-05-31 16:22:34.467080 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.467085 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.467090 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.467096 | orchestrator | 2025-05-31 16:22:34.467101 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-31 16:22:34.467111 | orchestrator | Saturday 31 May 2025 16:18:56 +0000 (0:00:01.020) 0:03:05.015 ********** 2025-05-31 16:22:34.467121 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.467126 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.467131 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.467137 | orchestrator | 2025-05-31 16:22:34.467142 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-31 16:22:34.467147 | orchestrator | Saturday 31 May 2025 16:18:58 +0000 (0:00:01.861) 0:03:06.876 ********** 2025-05-31 16:22:34.467153 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.467158 | orchestrator | 2025-05-31 16:22:34.467163 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-31 16:22:34.467168 | orchestrator | Saturday 31 May 2025 16:18:59 +0000 (0:00:00.982) 0:03:07.858 ********** 2025-05-31 16:22:34.467179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-31 16:22:34.467185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-31 16:22:34.467215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-31 16:22:34.467241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467265 | orchestrator | 2025-05-31 16:22:34.467270 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-31 16:22:34.467276 | orchestrator | Saturday 31 May 2025 16:19:02 +0000 (0:00:03.596) 0:03:11.455 ********** 2025-05-31 16:22:34.467284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-31 16:22:34.467290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467310 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-31 16:22:34.467325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467345 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467351 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-31 16:22:34.467360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.467397 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467402 | orchestrator | 2025-05-31 16:22:34.467408 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-31 16:22:34.467413 | orchestrator | Saturday 31 May 2025 16:19:03 +0000 (0:00:00.759) 0:03:12.215 ********** 2025-05-31 16:22:34.467419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-31 16:22:34.467427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2025-05-31 16:22:34.467433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-31 16:22:34.467439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-31 16:22:34.467444 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467450 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-31 16:22:34.467460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-31 16:22:34.467466 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467471 | orchestrator | 2025-05-31 16:22:34.467477 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-31 16:22:34.467482 | orchestrator | Saturday 31 May 2025 16:19:04 +0000 (0:00:01.090) 0:03:13.305 ********** 2025-05-31 16:22:34.467492 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.467497 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.467503 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.467508 | orchestrator | 2025-05-31 16:22:34.467513 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-31 16:22:34.467519 | orchestrator | Saturday 31 May 2025 16:19:06 +0000 (0:00:01.357) 0:03:14.663 ********** 2025-05-31 16:22:34.467524 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.467529 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.467534 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.467540 | orchestrator | 2025-05-31 16:22:34.467545 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-31 16:22:34.467550 | orchestrator | Saturday 31 May 2025 16:19:08 +0000 (0:00:02.169) 0:03:16.832 ********** 2025-05-31 16:22:34.467556 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.467561 | orchestrator | 2025-05-31 16:22:34.467566 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-31 16:22:34.467571 | orchestrator | Saturday 31 May 2025 16:19:09 +0000 (0:00:01.297) 0:03:18.130 ********** 2025-05-31 16:22:34.467575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:22:34.467580 | orchestrator | 2025-05-31 16:22:34.467585 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-31 16:22:34.467590 | orchestrator | Saturday 31 May 2025 16:19:12 +0000 (0:00:02.997) 0:03:21.127 ********** 2025-05-31 16:22:34.467598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 16:22:34.467607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 16:22:34.467615 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 16:22:34.467629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 16:22:34.467634 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 16:22:34.467651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 16:22:34.467656 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467661 | orchestrator | 2025-05-31 16:22:34.467666 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-31 16:22:34.467671 | orchestrator | Saturday 31 May 2025 16:19:15 +0000 (0:00:02.890) 0:03:24.018 ********** 2025-05-31 16:22:34.467680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 16:22:34.467689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 16:22:34.467694 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 16:22:34.467708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 16:22:34.467713 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 16:22:34.467733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 16:22:34.467738 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467743 | orchestrator | 2025-05-31 16:22:34.467748 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-31 16:22:34.467753 | orchestrator | Saturday 31 May 2025 16:19:18 +0000 (0:00:02.691) 0:03:26.710 ********** 2025-05-31 16:22:34.467758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 16:22:34.467763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 16:22:34.467771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 16:22:34.467777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 16:22:34.467782 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467786 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 16:22:34.467804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 16:22:34.467809 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467813 | orchestrator | 2025-05-31 16:22:34.467818 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-31 16:22:34.467823 | orchestrator | Saturday 31 May 2025 16:19:20 +0000 (0:00:02.719) 0:03:29.430 ********** 2025-05-31 16:22:34.467828 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.467832 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.467837 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.467842 | orchestrator | 2025-05-31 16:22:34.467846 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-31 16:22:34.467851 | orchestrator | Saturday 31 May 2025 16:19:22 +0000 (0:00:01.794) 0:03:31.224 ********** 2025-05-31 16:22:34.467856 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467861 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467865 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467870 | orchestrator | 2025-05-31 16:22:34.467875 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-31 16:22:34.467880 | orchestrator | Saturday 31 May 2025 16:19:23 +0000 (0:00:01.242) 0:03:32.466 
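The mariadb entry dumped above is the one service in this batch that ships a prebuilt custom_member_list: testbed-node-0 is the active Galera backend while testbed-node-1 and testbed-node-2 carry the backup keyword, each health-checked against clustercheck on port 4569. A minimal hand-rolled rendering of that dict into an HAProxy listen block (the real template lives in kolla-ansible's haproxy-config role; the VIP address below is a placeholder):

# Data copied from the 'mariadb' haproxy value shown in the items above.
service = {
    "mode": "tcp",
    "port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", "option httpchk"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
    ],
}

def render_listen(name: str, vip: str, svc: dict) -> str:
    """Approximate the listen block the haproxy-config role would emit."""
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind {vip}:{svc['port']}"]
    lines += [f"    {opt}" for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"]]
    lines += [f"   {member}" for member in svc["custom_member_list"] if member.strip()]
    return "\n".join(lines)

print(render_listen("mariadb", "192.168.16.254", service))  # VIP is a placeholder

Keeping the standby nodes as backup pins all client traffic to a single Galera member at a time, which is the usual reason for fronting Galera in active/standby fashion.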
********** 2025-05-31 16:22:34.467884 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467889 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467894 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.467898 | orchestrator | 2025-05-31 16:22:34.467903 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-31 16:22:34.467908 | orchestrator | Saturday 31 May 2025 16:19:24 +0000 (0:00:00.390) 0:03:32.856 ********** 2025-05-31 16:22:34.467912 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.467917 | orchestrator | 2025-05-31 16:22:34.467922 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-31 16:22:34.467927 | orchestrator | Saturday 31 May 2025 16:19:25 +0000 (0:00:01.432) 0:03:34.288 ********** 2025-05-31 16:22:34.467934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-31 16:22:34.467939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-31 16:22:34.467951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-31 16:22:34.467956 | orchestrator | 2025-05-31 16:22:34.467961 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-31 16:22:34.467966 | orchestrator | Saturday 31 May 2025 
16:19:27 +0000 (0:00:01.875) 0:03:36.163 ********** 2025-05-31 16:22:34.467971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-31 16:22:34.467976 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.467981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-31 16:22:34.467986 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.467993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-31 16:22:34.468016 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.468023 | orchestrator | 2025-05-31 16:22:34.468027 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-31 16:22:34.468032 | orchestrator | Saturday 31 May 2025 16:19:27 +0000 (0:00:00.369) 0:03:36.533 ********** 2025-05-31 16:22:34.468037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-31 16:22:34.468042 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.468047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-31 16:22:34.468052 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.468057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-31 16:22:34.468062 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.468066 | orchestrator | 2025-05-31 16:22:34.468074 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-31 16:22:34.468079 | orchestrator | Saturday 31 May 2025 16:19:28 +0000 (0:00:00.916) 0:03:37.450 ********** 2025-05-31 16:22:34.468083 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.468088 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.468093 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.468098 | orchestrator | 2025-05-31 16:22:34.468102 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-31 16:22:34.468107 | orchestrator | Saturday 31 May 2025 16:19:29 +0000 (0:00:00.847) 0:03:38.297 ********** 2025-05-31 16:22:34.468112 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.468116 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.468121 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.468126 | orchestrator | 2025-05-31 16:22:34.468130 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-31 16:22:34.468135 | orchestrator | Saturday 31 May 2025 16:19:31 +0000 (0:00:01.469) 0:03:39.767 ********** 2025-05-31 16:22:34.468140 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.468144 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.468149 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.468154 | orchestrator | 2025-05-31 16:22:34.468158 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-31 16:22:34.468163 | orchestrator | Saturday 31 May 2025 16:19:31 +0000 (0:00:00.318) 0:03:40.086 ********** 2025-05-31 16:22:34.468168 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.468173 | orchestrator | 2025-05-31 16:22:34.468177 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-31 16:22:34.468182 | orchestrator | Saturday 31 May 2025 16:19:32 +0000 (0:00:01.447) 0:03:41.533 ********** 2025-05-31 16:22:34.468187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:22:34.468200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:22:34.468224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
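Nearly every container definition in these items carries the same healthcheck shape: a CMD-SHELL test plus interval, retries, start_period and timeout given as bare numbers. A small sketch of how such a dict maps onto Docker health-check options (assuming the bare numbers are seconds; the deployment itself drives Docker through kolla's Ansible modules rather than the CLI):

def healthcheck_to_cli(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck dict into docker run flags."""
    test = hc["test"]
    assert test[0] == "CMD-SHELL"  # every healthcheck in this log uses CMD-SHELL
    return [
        f"--health-cmd={test[1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example taken from the neutron-sriov-agent item above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port neutron-sriov-nic-agent 5672"],
      "timeout": "30"}
print(" ".join(healthcheck_to_cli(hc)))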
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.468271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:22:34.468296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.468302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468328 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:22:34.468433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.468486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
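The pattern across this task is that only neutron-server comes back "changed" while every agent container is "skipping". A plausible reconstruction of the selection logic (not the role's literal when: clause): haproxy config is only templated for services that are enabled and expose at least one enabled haproxy frontend.

def wants_haproxy(item: dict) -> bool:
    """Return True if a service item should get an haproxy config rendered."""
    value = item["value"]
    if not value.get("enabled"):
        return False
    frontends = value.get("haproxy", {})
    return any(fe.get("enabled") for fe in frontends.values())

# Simplified stand-ins for the items in this log.
services = {
    "neutron-server": {"enabled": True,
                       "haproxy": {"neutron_server": {"enabled": True}}},
    "neutron-dhcp-agent": {"enabled": False},                # disabled, no haproxy section
    "neutron-ovn-metadata-agent": {"enabled": True},         # enabled, but no frontend defined
}
for key, value in services.items():
    state = "template haproxy config" if wants_haproxy({"key": key, "value": value}) else "skip"
    print(f"{key}: {state}")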
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.468505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
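Each neutron-server item defines a matched pair of frontends: an internal one bound to the internal VIP and an external one keyed off external_fqdn api.testbed.osism.xyz, both on port 9696. A rough sketch of the two resulting listen blocks (both VIP addresses are placeholders, and the real haproxy-config template adds options not shown here):

entries = {
    "neutron_server": {"enabled": True, "mode": "http", "external": False,
                       "port": "9696", "listen_port": "9696"},
    "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                "external_fqdn": "api.testbed.osism.xyz",
                                "port": "9696", "listen_port": "9696"},
}
INTERNAL_VIP, EXTERNAL_VIP = "192.168.16.254", "203.0.113.10"  # placeholders

for name, fe in entries.items():
    vip = EXTERNAL_VIP if fe["external"] else INTERNAL_VIP
    print(f"listen {name}")
    print(f"    mode {fe['mode']}")
    print(f"    bind {vip}:{fe['listen_port']}")
    if fe.get("external_fqdn"):
        print(f"    # advertised as {fe['external_fqdn']}")
    # Backend members are the three controllers seen throughout this log.
    for node, addr in [("testbed-node-0", "192.168.16.10"),
                       ("testbed-node-1", "192.168.16.11"),
                       ("testbed-node-2", "192.168.16.12")]:
        print(f"    server {node} {addr}:{fe['port']} check")
    print()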
'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:22:34.468558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:22:34.468601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.468647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.468669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468679 | orchestrator | 2025-05-31 16:22:34.468685 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-31 16:22:34.468690 | orchestrator | Saturday 31 May 2025 16:19:38 +0000 (0:00:05.449) 0:03:46.983 ********** 2025-05-31 16:22:34.468698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:22:34.468707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:22:34.468725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:22:34.468747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:22:34.468776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2025-05-31 16:22:34.468781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:22:34.468799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:22:34.468855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.468860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.468917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.468944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.468962 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.468971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.468976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:22:34.468988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.468993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.469029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:22:34.469037 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.469043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:22:34.469049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469060 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.469065 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469071 | orchestrator | 2025-05-31 16:22:34.469076 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-31 16:22:34.469084 | orchestrator | Saturday 31 May 2025 16:19:40 +0000 (0:00:01.710) 0:03:48.693 ********** 2025-05-31 16:22:34.469092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-31 16:22:34.469098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-31 16:22:34.469104 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.469109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-31 16:22:34.469115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-31 16:22:34.469120 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-31 16:22:34.469131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-31 16:22:34.469137 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.469142 | orchestrator | 2025-05-31 16:22:34.469147 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-31 16:22:34.469155 | orchestrator | Saturday 31 May 2025 16:19:42 +0000 (0:00:02.061) 0:03:50.755 ********** 2025-05-31 16:22:34.469161 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.469166 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.469171 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.469176 | orchestrator | 2025-05-31 16:22:34.469182 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-31 16:22:34.469187 | orchestrator | Saturday 31 May 2025 16:19:43 +0000 (0:00:01.528) 0:03:52.283 ********** 2025-05-31 16:22:34.469192 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.469197 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.469203 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.469208 | orchestrator | 2025-05-31 16:22:34.469213 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-31 16:22:34.469219 | orchestrator | Saturday 31 May 2025 16:19:45 +0000 (0:00:02.319) 0:03:54.603 ********** 2025-05-31 16:22:34.469224 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.469229 | orchestrator | 2025-05-31 16:22:34.469235 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-31 16:22:34.469240 | orchestrator | Saturday 31 May 2025 16:19:47 +0000 (0:00:01.473) 0:03:56.076 ********** 2025-05-31 16:22:34.469246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.469255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.469264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.469269 | orchestrator | 2025-05-31 16:22:34.469275 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-31 16:22:34.469280 | orchestrator | Saturday 31 May 2025 16:19:50 +0000 (0:00:03.492) 0:03:59.569 ********** 2025-05-31 16:22:34.469289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
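For orientation, the per-service entries iterated by the haproxy-config tasks above all share a common shape; the following is a minimal, illustrative Python sketch (not the kolla-ansible implementation) built from the placement-api values logged for testbed-node-0, with the filtering logic assumed purely for illustration.

# Minimal sketch (not the kolla-ansible implementation) of the per-service
# structure iterated by the haproxy-config tasks above. Values are copied from
# the placement-api item logged for testbed-node-0; the filtering below is an
# illustrative assumption.
placement_api = {
    "container_name": "placement_api",
    "group": "placement-api",
    "enabled": True,
    "haproxy": {
        "placement_api": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
        "placement_api_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
    },
}

# Only listeners whose 'enabled' flag is truthy would get haproxy configuration
# written; everything else surfaces as a "skipping" line, as in the log above.
enabled_listeners = {
    name: cfg
    for name, cfg in placement_api["haproxy"].items()
    if cfg.get("enabled") in (True, "yes")
}
print(sorted(enabled_listeners))  # ['placement_api', 'placement_api_external']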
 2025-05-31 16:22:34.469295 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.469309 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.469314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.469319 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.469324 | orchestrator | 2025-05-31 16:22:34.469328 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-31 16:22:34.469333 | orchestrator | Saturday 31 May 2025 16:19:51 +0000 (0:00:00.775) 0:04:00.344 ********** 2025-05-31 16:22:34.469342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469352 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469367 | orchestrator | skipping: 
[testbed-node-1] 2025-05-31 16:22:34.469372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469451 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.469456 | orchestrator | 2025-05-31 16:22:34.469461 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-31 16:22:34.469466 | orchestrator | Saturday 31 May 2025 16:19:52 +0000 (0:00:01.092) 0:04:01.437 ********** 2025-05-31 16:22:34.469471 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.469479 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.469486 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.469490 | orchestrator | 2025-05-31 16:22:34.469495 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-31 16:22:34.469500 | orchestrator | Saturday 31 May 2025 16:19:54 +0000 (0:00:01.410) 0:04:02.847 ********** 2025-05-31 16:22:34.469504 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.469509 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.469514 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.469518 | orchestrator | 2025-05-31 16:22:34.469523 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-31 16:22:34.469532 | orchestrator | Saturday 31 May 2025 16:19:56 +0000 (0:00:02.294) 0:04:05.142 ********** 2025-05-31 16:22:34.469537 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.469542 | orchestrator | 2025-05-31 16:22:34.469546 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-31 16:22:34.469551 | orchestrator | Saturday 31 May 2025 16:19:58 +0000 (0:00:01.730) 0:04:06.872 ********** 2025-05-31 16:22:34.469556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
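As a rough guide to how the listener entries in the nova-api item above (nova_api, nova_metadata, and their *_external variants) relate to frontend configuration, here is a simplified Python sketch; it is illustrative only and not the actual kolla-ansible haproxy template, and 'internal-vip' is a placeholder name.

# Minimal sketch, assuming a simplified rendering, of how the nova listener
# entries logged above could map onto haproxy frontend stanzas. Illustrative
# only; not the kolla-ansible template. 'internal-vip' is a placeholder.
listeners = {
    "nova_api": {"enabled": True, "mode": "http", "external": False,
                 "listen_port": "8774"},
    "nova_api_external": {"enabled": True, "mode": "http", "external": True,
                          "external_fqdn": "api.testbed.osism.xyz",
                          "listen_port": "8774"},
    "nova_metadata": {"enabled": True, "mode": "http", "external": False,
                      "listen_port": "8775"},
    "nova_metadata_external": {"enabled": "no", "mode": "http", "external": True,
                               "external_fqdn": "api.testbed.osism.xyz",
                               "listen_port": "8775"},
}

for name, cfg in listeners.items():
    if cfg["enabled"] not in (True, "yes"):
        continue  # nova_metadata_external is 'no' in the log above, so it is skipped
    bind_host = cfg["external_fqdn"] if cfg["external"] else "internal-vip"
    print(f"frontend {name}")
    print(f"    mode {cfg['mode']}")
    print(f"    bind {bind_host}:{cfg['listen_port']}")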
2025-05-31 16:22:34.469566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.469589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.469608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469624 | orchestrator | 2025-05-31 16:22:34.469629 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-31 16:22:34.469634 | orchestrator | Saturday 31 May 2025 16:20:03 +0000 (0:00:05.033) 0:04:11.906 ********** 2025-05-31 16:22:34.469639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.469644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469657 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-05-31 16:22:34.469674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469684 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.469689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.469697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.469707 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.469712 | orchestrator | 2025-05-31 16:22:34.469717 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-31 16:22:34.469725 | orchestrator | Saturday 31 May 2025 16:20:04 +0000 (0:00:00.845) 0:04:12.751 ********** 2025-05-31 16:22:34.469732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469753 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469777 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.469782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 16:22:34.469816 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.469821 | orchestrator | 2025-05-31 16:22:34.469826 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-31 16:22:34.469833 | orchestrator | Saturday 31 May 2025 16:20:05 +0000 (0:00:01.287) 0:04:14.038 ********** 2025-05-31 16:22:34.469838 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.469843 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.469847 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.469852 | orchestrator | 2025-05-31 16:22:34.469857 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-31 16:22:34.469861 | orchestrator | Saturday 31 May 2025 16:20:06 +0000 (0:00:01.388) 0:04:15.427 ********** 2025-05-31 16:22:34.469866 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.469875 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.469879 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.469884 | orchestrator | 2025-05-31 16:22:34.469889 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-31 16:22:34.469893 | orchestrator | Saturday 31 May 2025 16:20:09 +0000 (0:00:02.220) 0:04:17.648 ********** 2025-05-31 16:22:34.469898 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.469903 | orchestrator | 2025-05-31 16:22:34.469907 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-31 16:22:34.469912 | orchestrator | Saturday 31 May 2025 16:20:10 +0000 (0:00:01.392) 0:04:19.041 ********** 2025-05-31 16:22:34.469917 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-31 16:22:34.469922 | orchestrator | 2025-05-31 16:22:34.469926 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-31 16:22:34.469931 | orchestrator | Saturday 31 May 2025 16:20:11 +0000 (0:00:01.552) 0:04:20.593 ********** 2025-05-31 16:22:34.469939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-31 16:22:34.469944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-31 16:22:34.469949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-31 16:22:34.469954 | orchestrator | 2025-05-31 16:22:34.469959 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-31 16:22:34.469964 | orchestrator | Saturday 31 May 2025 16:20:16 +0000 (0:00:04.852) 0:04:25.445 ********** 2025-05-31 16:22:34.469969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.469974 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.469979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.469988 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.469995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470000 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470091 | orchestrator | 2025-05-31 16:22:34.470097 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-31 16:22:34.470103 | orchestrator | Saturday 31 May 2025 16:20:17 +0000 (0:00:01.110) 0:04:26.556 ********** 2025-05-31 16:22:34.470108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 16:22:34.470114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 16:22:34.470120 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 16:22:34.470136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 16:22:34.470142 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 16:22:34.470154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 16:22:34.470159 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470164 | orchestrator | 2025-05-31 16:22:34.470169 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-31 16:22:34.470175 | orchestrator | Saturday 31 May 2025 16:20:19 +0000 (0:00:01.625) 0:04:28.181 ********** 2025-05-31 16:22:34.470180 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.470185 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.470191 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.470196 | orchestrator | 2025-05-31 16:22:34.470202 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-31 16:22:34.470208 | orchestrator | Saturday 31 May 2025 16:20:22 +0000 (0:00:02.783) 0:04:30.965 ********** 2025-05-31 16:22:34.470213 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.470219 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.470224 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.470229 | orchestrator | 2025-05-31 16:22:34.470235 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-31 16:22:34.470240 | orchestrator | Saturday 31 May 2025 16:20:25 +0000 (0:00:03.516) 0:04:34.481 ********** 2025-05-31 16:22:34.470246 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-31 16:22:34.470256 | orchestrator | 2025-05-31 16:22:34.470262 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-31 16:22:34.470267 | orchestrator | Saturday 31 May 2025 16:20:27 +0000 (0:00:01.253) 0:04:35.735 ********** 2025-05-31 16:22:34.470273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470279 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470295 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470307 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470312 | orchestrator | 2025-05-31 16:22:34.470318 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-31 16:22:34.470323 | orchestrator | Saturday 31 May 2025 16:20:28 +0000 (0:00:01.569) 0:04:37.305 ********** 2025-05-31 16:22:34.470332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470338 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470349 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 16:22:34.470364 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470369 | orchestrator | 2025-05-31 16:22:34.470375 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-31 16:22:34.470380 | orchestrator | Saturday 31 May 2025 16:20:30 +0000 (0:00:01.803) 0:04:39.108 ********** 2025-05-31 16:22:34.470385 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470391 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470396 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470402 | orchestrator | 2025-05-31 16:22:34.470407 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-31 16:22:34.470413 | orchestrator | Saturday 31 May 2025 16:20:32 +0000 (0:00:01.966) 0:04:41.075 ********** 2025-05-31 16:22:34.470418 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.470424 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.470429 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.470434 | orchestrator | 2025-05-31 16:22:34.470439 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-31 16:22:34.470444 | orchestrator | Saturday 31 May 2025 16:20:35 +0000 (0:00:02.577) 0:04:43.652 ********** 2025-05-31 16:22:34.470448 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.470453 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.470458 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.470463 | orchestrator | 2025-05-31 16:22:34.470467 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-31 16:22:34.470472 | orchestrator | Saturday 31 May 2025 16:20:37 +0000 (0:00:02.640) 0:04:46.293 ********** 2025-05-31 16:22:34.470477 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-serialproxy) 2025-05-31 16:22:34.470482 | orchestrator | 2025-05-31 16:22:34.470486 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-31 16:22:34.470491 | orchestrator | Saturday 31 May 2025 16:20:38 +0000 (0:00:00.991) 0:04:47.284 ********** 2025-05-31 16:22:34.470499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 16:22:34.470504 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 16:22:34.470513 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 16:22:34.470529 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470534 | orchestrator | 2025-05-31 16:22:34.470539 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-31 16:22:34.470543 | orchestrator | Saturday 31 May 2025 16:20:39 +0000 (0:00:01.078) 0:04:48.362 ********** 2025-05-31 16:22:34.470548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 16:22:34.470553 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 16:22:34.470563 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 16:22:34.470573 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470577 | orchestrator | 2025-05-31 16:22:34.470582 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-31 16:22:34.470587 | orchestrator | Saturday 31 May 2025 16:20:41 +0000 (0:00:01.328) 0:04:49.691 ********** 2025-05-31 16:22:34.470591 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470596 | 
orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470601 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470606 | orchestrator | 2025-05-31 16:22:34.470610 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-31 16:22:34.470615 | orchestrator | Saturday 31 May 2025 16:20:42 +0000 (0:00:01.502) 0:04:51.193 ********** 2025-05-31 16:22:34.470620 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.470624 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.470629 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.470634 | orchestrator | 2025-05-31 16:22:34.470656 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-31 16:22:34.470662 | orchestrator | Saturday 31 May 2025 16:20:44 +0000 (0:00:02.369) 0:04:53.563 ********** 2025-05-31 16:22:34.470667 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.470672 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.470676 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.470681 | orchestrator | 2025-05-31 16:22:34.470686 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-31 16:22:34.470690 | orchestrator | Saturday 31 May 2025 16:20:48 +0000 (0:00:03.711) 0:04:57.275 ********** 2025-05-31 16:22:34.470695 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.470700 | orchestrator | 2025-05-31 16:22:34.470705 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-31 16:22:34.470713 | orchestrator | Saturday 31 May 2025 16:20:50 +0000 (0:00:01.663) 0:04:58.938 ********** 2025-05-31 16:22:34.470721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.470727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 16:22:34.470732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.470751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.470759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 16:22:34.470767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.470785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.470790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 16:22:34.470799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.470818 | orchestrator | 2025-05-31 16:22:34.470823 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-31 16:22:34.470828 | orchestrator | Saturday 31 May 2025 16:20:54 +0000 (0:00:04.292) 0:05:03.231 ********** 2025-05-31 16:22:34.470833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.470838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-05-31 16:22:34.470849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.470867 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.470877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
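Aside (illustrative): in the octavia tasks above, only the octavia-api item carries a 'haproxy' mapping, so it is the only one rendered into the load balancer configuration; the driver-agent, health-manager, housekeeping and worker items are skipped. A minimal Python sketch of that selection, assuming the same dict shape as the logged items (the helper and the trimmed data are for illustration only):

    # Only services that define a 'haproxy' mapping get load balancer config;
    # the others are skipped, matching the skipping: lines in the log.
    octavia_services = {
        "octavia-api": {"enabled": True, "haproxy": {"octavia_api": {"port": "9876"}}},
        "octavia-driver-agent": {"enabled": True},
        "octavia-health-manager": {"enabled": True},
        "octavia-housekeeping": {"enabled": True},
        "octavia-worker": {"enabled": True},
    }
    proxied = [name for name, svc in octavia_services.items() if svc.get("haproxy")]
    print(proxied)  # ['octavia-api']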
 2025-05-31 16:22:34.470883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.470902 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.470920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.470926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}}})  2025-05-31 16:22:34.470931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 16:22:34.470950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:22:34.470955 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.470960 | orchestrator | 2025-05-31 16:22:34.470964 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-31 16:22:34.470969 | orchestrator | Saturday 31 May 2025 16:20:55 +0000 (0:00:00.748) 0:05:03.979 ********** 2025-05-31 16:22:34.470974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 16:22:34.470979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 16:22:34.470984 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.470989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 16:22:34.470996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 16:22:34.471001 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.471042 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 16:22:34.471047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 16:22:34.471052 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.471057 | orchestrator | 2025-05-31 16:22:34.471062 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-31 16:22:34.471067 | orchestrator | Saturday 31 May 2025 16:20:56 +0000 (0:00:01.040) 0:05:05.019 ********** 2025-05-31 16:22:34.471071 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.471076 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.471081 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.471086 | orchestrator | 2025-05-31 16:22:34.471090 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-31 16:22:34.471095 | orchestrator | Saturday 31 May 2025 16:20:57 +0000 (0:00:01.319) 0:05:06.339 ********** 2025-05-31 16:22:34.471100 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.471104 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.471109 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.471114 | orchestrator | 2025-05-31 16:22:34.471119 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-31 16:22:34.471123 | orchestrator | Saturday 31 May 2025 16:21:00 +0000 (0:00:02.296) 0:05:08.636 ********** 2025-05-31 16:22:34.471128 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.471138 | orchestrator | 2025-05-31 16:22:34.471143 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-31 16:22:34.471147 | orchestrator | Saturday 31 May 2025 16:21:01 +0000 (0:00:01.438) 0:05:10.075 ********** 2025-05-31 16:22:34.471153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:22:34.471162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:22:34.471170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:22:34.471176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:22:34.471182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:22:34.471196 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:22:34.471201 | orchestrator | 2025-05-31 16:22:34.471206 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-31 16:22:34.471211 | orchestrator | Saturday 31 May 2025 16:21:07 +0000 (0:00:06.095) 0:05:16.170 ********** 2025-05-31 16:22:34.471218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:22:34.471224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:22:34.471234 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.471239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:22:34.471247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:22:34.471252 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.471257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:22:34.471265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:22:34.471274 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.471279 | orchestrator | 2025-05-31 16:22:34.471284 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-31 16:22:34.471288 | orchestrator | Saturday 31 May 2025 16:21:08 +0000 (0:00:00.851) 0:05:17.021 ********** 2025-05-31 16:22:34.471293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-31 16:22:34.471298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 16:22:34.471303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 16:22:34.471309 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.471313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-31 16:22:34.471318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 16:22:34.471323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 16:22:34.471331 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.471336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-31 16:22:34.471340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 16:22:34.471345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 16:22:34.471350 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.471355 | orchestrator | 2025-05-31 16:22:34.471360 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-31 16:22:34.471364 | orchestrator | 
Saturday 31 May 2025 16:21:09 +0000 (0:00:01.264) 0:05:18.286 ********** 2025-05-31 16:22:34.471369 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.471374 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.471378 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.471383 | orchestrator | 2025-05-31 16:22:34.471388 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-31 16:22:34.471393 | orchestrator | Saturday 31 May 2025 16:21:10 +0000 (0:00:00.691) 0:05:18.977 ********** 2025-05-31 16:22:34.471397 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.471402 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.471407 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.471412 | orchestrator | 2025-05-31 16:22:34.471423 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-31 16:22:34.471428 | orchestrator | Saturday 31 May 2025 16:21:12 +0000 (0:00:01.708) 0:05:20.686 ********** 2025-05-31 16:22:34.471432 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.471437 | orchestrator | 2025-05-31 16:22:34.471442 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-31 16:22:34.471446 | orchestrator | Saturday 31 May 2025 16:21:13 +0000 (0:00:01.796) 0:05:22.483 ********** 2025-05-31 16:22:34.471451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:22:34.471456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:22:34.471461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:22:34.471489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:22:34.471494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471504 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:22:34.471511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:22:34.471521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:22:34.471549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:22:34.471554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:22:34.471588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:22:34.471593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:22:34.471637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:22:34.471648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471689 | orchestrator | 2025-05-31 16:22:34.471694 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-31 16:22:34.471698 | orchestrator | Saturday 31 May 2025 16:21:18 +0000 (0:00:04.731) 0:05:27.215 ********** 2025-05-31 16:22:34.471703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:22:34.471708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:22:34.471713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:22:34.471741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:22:34.471746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471784 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.471791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:22:34.471796 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:22:34.471801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:22:34.471829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:22:34.471834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471856 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.471864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:22:34.471868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:22:34.471875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:22:34.471900 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:22:34.471905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:22:34.471922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:22:34.471927 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.471931 | orchestrator | 2025-05-31 16:22:34.471936 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-31 16:22:34.471940 | orchestrator | Saturday 31 May 2025 16:21:19 +0000 (0:00:01.182) 0:05:28.398 ********** 2025-05-31 16:22:34.471945 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-31 16:22:34.471950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-31 16:22:34.471960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 16:22:34.471965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 16:22:34.471970 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.471975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-31 16:22:34.471982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-31 16:22:34.471987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 16:22:34.471992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 16:22:34.471997 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-31 16:22:34.472021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-31 16:22:34.472028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 16:22:34.472033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 16:22:34.472038 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472042 | orchestrator | 2025-05-31 16:22:34.472047 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-31 16:22:34.472051 | orchestrator | Saturday 31 May 2025 16:21:21 +0000 (0:00:01.514) 0:05:29.912 ********** 2025-05-31 16:22:34.472056 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472060 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472065 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472069 | orchestrator | 2025-05-31 16:22:34.472074 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-31 16:22:34.472078 | orchestrator | Saturday 31 May 2025 16:21:22 +0000 (0:00:00.900) 0:05:30.812 ********** 2025-05-31 16:22:34.472083 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472091 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472095 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472100 | orchestrator | 2025-05-31 16:22:34.472104 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-31 16:22:34.472109 | orchestrator | Saturday 31 May 2025 16:21:23 +0000 (0:00:01.608) 0:05:32.421 ********** 2025-05-31 16:22:34.472113 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.472117 | orchestrator | 2025-05-31 16:22:34.472122 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-31 16:22:34.472126 | orchestrator | Saturday 31 May 2025 16:21:25 +0000 (0:00:01.601) 0:05:34.022 ********** 2025-05-31 16:22:34.472131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:22:34.472139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:22:34.472146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 16:22:34.472151 | orchestrator | 2025-05-31 16:22:34.472156 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-31 16:22:34.472160 | orchestrator | Saturday 31 May 2025 16:21:28 +0000 (0:00:02.795) 0:05:36.817 ********** 2025-05-31 16:22:34.472168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-31 16:22:34.472173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-31 16:22:34.472178 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472183 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-31 16:22:34.472195 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472199 | orchestrator | 2025-05-31 16:22:34.472204 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-31 16:22:34.472208 | orchestrator | Saturday 31 May 2025 16:21:28 +0000 (0:00:00.648) 0:05:37.465 ********** 2025-05-31 16:22:34.472213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-31 16:22:34.472217 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-31 16:22:34.472229 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-31 16:22:34.472241 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472246 | orchestrator | 2025-05-31 16:22:34.472250 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-31 16:22:34.472255 | orchestrator | Saturday 31 May 2025 16:21:29 +0000 (0:00:00.803) 0:05:38.269 ********** 2025-05-31 16:22:34.472259 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472263 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472268 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472272 | orchestrator | 2025-05-31 16:22:34.472277 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-31 16:22:34.472281 | orchestrator | Saturday 31 May 2025 16:21:30 +0000 (0:00:00.692) 0:05:38.962 ********** 2025-05-31 16:22:34.472286 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472290 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472295 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472299 | orchestrator | 2025-05-31 
16:22:34.472304 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-31 16:22:34.472308 | orchestrator | Saturday 31 May 2025 16:21:32 +0000 (0:00:01.720) 0:05:40.682 ********** 2025-05-31 16:22:34.472312 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:22:34.472317 | orchestrator | 2025-05-31 16:22:34.472321 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-31 16:22:34.472326 | orchestrator | Saturday 31 May 2025 16:21:33 +0000 (0:00:01.872) 0:05:42.555 ********** 2025-05-31 16:22:34.472331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.472338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.472346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.472355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.472360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.472367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-31 16:22:34.472372 | orchestrator | 2025-05-31 16:22:34.472376 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-31 16:22:34.472381 | orchestrator | Saturday 31 May 2025 16:21:41 +0000 (0:00:07.786) 0:05:50.342 ********** 2025-05-31 16:22:34.472386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.472397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.472401 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.472411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.472416 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.472434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-31 16:22:34.472439 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472443 | orchestrator | 2025-05-31 16:22:34.472448 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-31 16:22:34.472452 | orchestrator | Saturday 31 May 2025 16:21:42 +0000 (0:00:00.887) 0:05:51.229 ********** 2025-05-31 16:22:34.472457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 
16:22:34.472475 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472498 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 16:22:34.472527 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472531 | orchestrator | 2025-05-31 16:22:34.472536 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-31 16:22:34.472540 | orchestrator | Saturday 31 May 2025 16:21:44 +0000 (0:00:01.721) 0:05:52.951 ********** 2025-05-31 16:22:34.472545 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.472549 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.472554 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.472558 | orchestrator | 2025-05-31 16:22:34.472562 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-31 16:22:34.472567 | orchestrator | Saturday 31 May 2025 16:21:45 +0000 (0:00:01.384) 0:05:54.336 ********** 2025-05-31 16:22:34.472571 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.472576 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.472580 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.472585 | orchestrator | 2025-05-31 16:22:34.472591 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-31 16:22:34.472596 | orchestrator | Saturday 31 May 2025 16:21:48 +0000 (0:00:02.398) 0:05:56.734 ********** 
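The "haproxy-config : Copying over ... haproxy config" and "Configuring firewall" tasks above loop over each service definition (the dict items echoed in the log) and only act on entries whose service and haproxy sub-blocks are enabled; everything else is reported as skipping. A minimal Python sketch of that selection logic, assuming dicts shaped like the items in this log (an illustration only, not kolla-ansible's actual Jinja templating):

def enabled_haproxy_frontends(project_services):
    # Collect the haproxy frontends the copy/firewall tasks would act on.
    selected = {}
    for name, svc in project_services.items():
        if not svc.get("enabled"):
            continue  # whole service disabled -> every item is skipped
        for ha_name, ha_conf in svc.get("haproxy", {}).items():
            # 'enabled' appears both as booleans and as 'yes' strings above
            if ha_conf.get("enabled") in (True, "yes"):
                selected[ha_name] = ha_conf
    return selected

# Example shaped like the rabbitmq item copied above:
services = {
    "rabbitmq": {
        "enabled": True,
        "haproxy": {
            "rabbitmq_management": {"enabled": "yes", "mode": "http", "port": "15672", "host_group": "rabbitmq"},
        },
    },
}
print(enabled_haproxy_frontends(services))
# -> {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}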
2025-05-31 16:22:34.472601 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472605 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472610 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472614 | orchestrator | 2025-05-31 16:22:34.472619 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-31 16:22:34.472623 | orchestrator | Saturday 31 May 2025 16:21:48 +0000 (0:00:00.305) 0:05:57.040 ********** 2025-05-31 16:22:34.472628 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472632 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472637 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472641 | orchestrator | 2025-05-31 16:22:34.472645 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-31 16:22:34.472650 | orchestrator | Saturday 31 May 2025 16:21:48 +0000 (0:00:00.544) 0:05:57.585 ********** 2025-05-31 16:22:34.472654 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472659 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472663 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472668 | orchestrator | 2025-05-31 16:22:34.472672 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-31 16:22:34.472676 | orchestrator | Saturday 31 May 2025 16:21:49 +0000 (0:00:00.509) 0:05:58.094 ********** 2025-05-31 16:22:34.472681 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472685 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472690 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472694 | orchestrator | 2025-05-31 16:22:34.472699 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-31 16:22:34.472703 | orchestrator | Saturday 31 May 2025 16:21:49 +0000 (0:00:00.291) 0:05:58.386 ********** 2025-05-31 16:22:34.472708 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472712 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472717 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472726 | orchestrator | 2025-05-31 16:22:34.472731 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-31 16:22:34.472735 | orchestrator | Saturday 31 May 2025 16:21:50 +0000 (0:00:00.515) 0:05:58.902 ********** 2025-05-31 16:22:34.472740 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.472744 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.472748 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.472753 | orchestrator | 2025-05-31 16:22:34.472757 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-31 16:22:34.472762 | orchestrator | Saturday 31 May 2025 16:21:51 +0000 (0:00:00.928) 0:05:59.831 ********** 2025-05-31 16:22:34.472766 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.472771 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472775 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472780 | orchestrator | 2025-05-31 16:22:34.472784 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-31 16:22:34.472789 | orchestrator | Saturday 31 May 2025 16:21:51 +0000 (0:00:00.677) 0:06:00.508 ********** 2025-05-31 16:22:34.472793 | orchestrator | ok: [testbed-node-0] 2025-05-31 
16:22:34.472798 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472802 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472806 | orchestrator | 2025-05-31 16:22:34.472811 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-31 16:22:34.472815 | orchestrator | Saturday 31 May 2025 16:21:52 +0000 (0:00:00.566) 0:06:01.074 ********** 2025-05-31 16:22:34.472820 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.472824 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472829 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472833 | orchestrator | 2025-05-31 16:22:34.472837 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-31 16:22:34.472842 | orchestrator | Saturday 31 May 2025 16:21:53 +0000 (0:00:01.297) 0:06:02.372 ********** 2025-05-31 16:22:34.472846 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.472851 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472855 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472860 | orchestrator | 2025-05-31 16:22:34.472867 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-31 16:22:34.472871 | orchestrator | Saturday 31 May 2025 16:21:55 +0000 (0:00:01.257) 0:06:03.629 ********** 2025-05-31 16:22:34.472876 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.472880 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472884 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472889 | orchestrator | 2025-05-31 16:22:34.472893 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-31 16:22:34.472898 | orchestrator | Saturday 31 May 2025 16:21:56 +0000 (0:00:00.999) 0:06:04.628 ********** 2025-05-31 16:22:34.472902 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.472907 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.472911 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.472915 | orchestrator | 2025-05-31 16:22:34.472920 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-31 16:22:34.472924 | orchestrator | Saturday 31 May 2025 16:22:00 +0000 (0:00:04.803) 0:06:09.432 ********** 2025-05-31 16:22:34.472929 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.472933 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472938 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472942 | orchestrator | 2025-05-31 16:22:34.472946 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-31 16:22:34.472951 | orchestrator | Saturday 31 May 2025 16:22:03 +0000 (0:00:03.081) 0:06:12.513 ********** 2025-05-31 16:22:34.472955 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.472960 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.472964 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.472968 | orchestrator | 2025-05-31 16:22:34.472973 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-31 16:22:34.472980 | orchestrator | Saturday 31 May 2025 16:22:14 +0000 (0:00:11.009) 0:06:23.522 ********** 2025-05-31 16:22:34.472985 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.472989 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.472994 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.472998 | 
orchestrator | 2025-05-31 16:22:34.473014 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-31 16:22:34.473022 | orchestrator | Saturday 31 May 2025 16:22:15 +0000 (0:00:01.065) 0:06:24.588 ********** 2025-05-31 16:22:34.473027 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:22:34.473031 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:22:34.473036 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:22:34.473040 | orchestrator | 2025-05-31 16:22:34.473045 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-31 16:22:34.473049 | orchestrator | Saturday 31 May 2025 16:22:25 +0000 (0:00:09.685) 0:06:34.273 ********** 2025-05-31 16:22:34.473054 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.473058 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.473062 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.473067 | orchestrator | 2025-05-31 16:22:34.473071 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-31 16:22:34.473076 | orchestrator | Saturday 31 May 2025 16:22:26 +0000 (0:00:00.566) 0:06:34.840 ********** 2025-05-31 16:22:34.473080 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.473085 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.473089 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.473093 | orchestrator | 2025-05-31 16:22:34.473098 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-31 16:22:34.473102 | orchestrator | Saturday 31 May 2025 16:22:26 +0000 (0:00:00.325) 0:06:35.165 ********** 2025-05-31 16:22:34.473107 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.473111 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.473115 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.473120 | orchestrator | 2025-05-31 16:22:34.473124 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-31 16:22:34.473129 | orchestrator | Saturday 31 May 2025 16:22:27 +0000 (0:00:00.575) 0:06:35.740 ********** 2025-05-31 16:22:34.473133 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.473138 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.473142 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.473147 | orchestrator | 2025-05-31 16:22:34.473151 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-31 16:22:34.473155 | orchestrator | Saturday 31 May 2025 16:22:27 +0000 (0:00:00.587) 0:06:36.328 ********** 2025-05-31 16:22:34.473160 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.473164 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.473169 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.473173 | orchestrator | 2025-05-31 16:22:34.473177 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-31 16:22:34.473182 | orchestrator | Saturday 31 May 2025 16:22:28 +0000 (0:00:00.576) 0:06:36.904 ********** 2025-05-31 16:22:34.473186 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:22:34.473191 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:22:34.473195 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:22:34.473199 | orchestrator | 2025-05-31 16:22:34.473204 | orchestrator | 
RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-31 16:22:34.473208 | orchestrator | Saturday 31 May 2025 16:22:28 +0000 (0:00:00.318) 0:06:37.222 ********** 2025-05-31 16:22:34.473213 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.473217 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.473222 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.473226 | orchestrator | 2025-05-31 16:22:34.473231 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-31 16:22:34.473235 | orchestrator | Saturday 31 May 2025 16:22:29 +0000 (0:00:01.184) 0:06:38.407 ********** 2025-05-31 16:22:34.473243 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:22:34.473248 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:22:34.473252 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:22:34.473256 | orchestrator | 2025-05-31 16:22:34.473261 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:22:34.473265 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-31 16:22:34.473273 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-31 16:22:34.473278 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-31 16:22:34.473282 | orchestrator | 2025-05-31 16:22:34.473287 | orchestrator | 2025-05-31 16:22:34.473291 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:22:34.473296 | orchestrator | Saturday 31 May 2025 16:22:30 +0000 (0:00:01.160) 0:06:39.567 ********** 2025-05-31 16:22:34.473300 | orchestrator | =============================================================================== 2025-05-31 16:22:34.473304 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.01s 2025-05-31 16:22:34.473309 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.69s 2025-05-31 16:22:34.473313 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.79s 2025-05-31 16:22:34.473318 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 6.44s 2025-05-31 16:22:34.473322 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.10s 2025-05-31 16:22:34.473327 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.45s 2025-05-31 16:22:34.473331 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.03s 2025-05-31 16:22:34.473335 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.95s 2025-05-31 16:22:34.473340 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.85s 2025-05-31 16:22:34.473344 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.80s 2025-05-31 16:22:34.473349 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.73s 2025-05-31 16:22:34.473355 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.53s 2025-05-31 16:22:34.473360 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.41s 2025-05-31 16:22:34.473364 | 
orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.29s 2025-05-31 16:22:34.473369 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.23s 2025-05-31 16:22:34.473373 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.14s 2025-05-31 16:22:34.473377 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.12s 2025-05-31 16:22:34.473382 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.04s 2025-05-31 16:22:34.473386 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.97s 2025-05-31 16:22:34.473390 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.92s 2025-05-31 16:22:34.473395 | orchestrator | 2025-05-31 16:22:34 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:34.473399 | orchestrator | 2025-05-31 16:22:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:37.505546 | orchestrator | 2025-05-31 16:22:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:37.505955 | orchestrator | 2025-05-31 16:22:37 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:37.509348 | orchestrator | 2025-05-31 16:22:37 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:37.510112 | orchestrator | 2025-05-31 16:22:37 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:37.510144 | orchestrator | 2025-05-31 16:22:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:40.553117 | orchestrator | 2025-05-31 16:22:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:40.553233 | orchestrator | 2025-05-31 16:22:40 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:40.553258 | orchestrator | 2025-05-31 16:22:40 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:40.553494 | orchestrator | 2025-05-31 16:22:40 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:40.553530 | orchestrator | 2025-05-31 16:22:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:43.602240 | orchestrator | 2025-05-31 16:22:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:43.602322 | orchestrator | 2025-05-31 16:22:43 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:43.602336 | orchestrator | 2025-05-31 16:22:43 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:43.602349 | orchestrator | 2025-05-31 16:22:43 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:43.602360 | orchestrator | 2025-05-31 16:22:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:46.629793 | orchestrator | 2025-05-31 16:22:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:46.630127 | orchestrator | 2025-05-31 16:22:46 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:46.631266 | orchestrator | 2025-05-31 16:22:46 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:46.631298 | orchestrator | 2025-05-31 16:22:46 | INFO  | Task 
05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:46.631316 | orchestrator | 2025-05-31 16:22:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:49.663017 | orchestrator | 2025-05-31 16:22:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:49.664533 | orchestrator | 2025-05-31 16:22:49 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:49.666609 | orchestrator | 2025-05-31 16:22:49 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:49.668130 | orchestrator | 2025-05-31 16:22:49 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:49.668158 | orchestrator | 2025-05-31 16:22:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:52.698118 | orchestrator | 2025-05-31 16:22:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:52.699559 | orchestrator | 2025-05-31 16:22:52 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:52.699590 | orchestrator | 2025-05-31 16:22:52 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:52.699601 | orchestrator | 2025-05-31 16:22:52 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:52.699613 | orchestrator | 2025-05-31 16:22:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:55.747015 | orchestrator | 2025-05-31 16:22:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:55.748013 | orchestrator | 2025-05-31 16:22:55 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:55.749051 | orchestrator | 2025-05-31 16:22:55 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:55.750188 | orchestrator | 2025-05-31 16:22:55 | INFO  | Task 262f3d36-8a90-4319-95ae-ae4a2f05baa7 is in state STARTED 2025-05-31 16:22:55.751127 | orchestrator | 2025-05-31 16:22:55 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:55.751166 | orchestrator | 2025-05-31 16:22:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:22:58.779733 | orchestrator | 2025-05-31 16:22:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:22:58.780087 | orchestrator | 2025-05-31 16:22:58 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:22:58.780469 | orchestrator | 2025-05-31 16:22:58 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:22:58.781065 | orchestrator | 2025-05-31 16:22:58 | INFO  | Task 262f3d36-8a90-4319-95ae-ae4a2f05baa7 is in state STARTED 2025-05-31 16:22:58.781638 | orchestrator | 2025-05-31 16:22:58 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:22:58.781660 | orchestrator | 2025-05-31 16:22:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:01.816121 | orchestrator | 2025-05-31 16:23:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:01.816203 | orchestrator | 2025-05-31 16:23:01 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:01.816216 | orchestrator | 2025-05-31 16:23:01 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:01.817146 | orchestrator | 2025-05-31 16:23:01 | INFO  | Task 
262f3d36-8a90-4319-95ae-ae4a2f05baa7 is in state STARTED 2025-05-31 16:23:01.817173 | orchestrator | 2025-05-31 16:23:01 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:01.820298 | orchestrator | 2025-05-31 16:23:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:04.856137 | orchestrator | 2025-05-31 16:23:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:04.857577 | orchestrator | 2025-05-31 16:23:04 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:04.860230 | orchestrator | 2025-05-31 16:23:04 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:04.860746 | orchestrator | 2025-05-31 16:23:04 | INFO  | Task 262f3d36-8a90-4319-95ae-ae4a2f05baa7 is in state SUCCESS 2025-05-31 16:23:04.861909 | orchestrator | 2025-05-31 16:23:04 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:04.861950 | orchestrator | 2025-05-31 16:23:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:07.903409 | orchestrator | 2025-05-31 16:23:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:07.904329 | orchestrator | 2025-05-31 16:23:07 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:07.905183 | orchestrator | 2025-05-31 16:23:07 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:07.906492 | orchestrator | 2025-05-31 16:23:07 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:07.906518 | orchestrator | 2025-05-31 16:23:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:10.961442 | orchestrator | 2025-05-31 16:23:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:10.964672 | orchestrator | 2025-05-31 16:23:10 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:10.966198 | orchestrator | 2025-05-31 16:23:10 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:10.968662 | orchestrator | 2025-05-31 16:23:10 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:10.968751 | orchestrator | 2025-05-31 16:23:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:14.026463 | orchestrator | 2025-05-31 16:23:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:14.028288 | orchestrator | 2025-05-31 16:23:14 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:14.030276 | orchestrator | 2025-05-31 16:23:14 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:14.031724 | orchestrator | 2025-05-31 16:23:14 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:14.032032 | orchestrator | 2025-05-31 16:23:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:17.104325 | orchestrator | 2025-05-31 16:23:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:17.105676 | orchestrator | 2025-05-31 16:23:17 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:17.107683 | orchestrator | 2025-05-31 16:23:17 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:17.109711 | orchestrator | 2025-05-31 16:23:17 | INFO  | Task 
05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:17.109735 | orchestrator | 2025-05-31 16:23:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:20.179523 | orchestrator | 2025-05-31 16:23:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:20.181460 | orchestrator | 2025-05-31 16:23:20 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:20.181749 | orchestrator | 2025-05-31 16:23:20 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:20.184331 | orchestrator | 2025-05-31 16:23:20 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:20.184642 | orchestrator | 2025-05-31 16:23:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:23.250134 | orchestrator | 2025-05-31 16:23:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:23.250227 | orchestrator | 2025-05-31 16:23:23 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:23.252191 | orchestrator | 2025-05-31 16:23:23 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:23.253635 | orchestrator | 2025-05-31 16:23:23 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:23.253666 | orchestrator | 2025-05-31 16:23:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:26.302716 | orchestrator | 2025-05-31 16:23:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:26.303916 | orchestrator | 2025-05-31 16:23:26 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:26.304629 | orchestrator | 2025-05-31 16:23:26 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:26.306154 | orchestrator | 2025-05-31 16:23:26 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:26.306224 | orchestrator | 2025-05-31 16:23:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:29.365742 | orchestrator | 2025-05-31 16:23:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:29.368089 | orchestrator | 2025-05-31 16:23:29 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:29.371149 | orchestrator | 2025-05-31 16:23:29 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:29.373664 | orchestrator | 2025-05-31 16:23:29 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:29.373841 | orchestrator | 2025-05-31 16:23:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:32.410862 | orchestrator | 2025-05-31 16:23:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:32.411288 | orchestrator | 2025-05-31 16:23:32 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:32.412922 | orchestrator | 2025-05-31 16:23:32 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:32.414145 | orchestrator | 2025-05-31 16:23:32 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:32.414724 | orchestrator | 2025-05-31 16:23:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:35.464848 | orchestrator | 2025-05-31 16:23:35 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:35.467181 | orchestrator | 2025-05-31 16:23:35 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:35.469690 | orchestrator | 2025-05-31 16:23:35 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:35.471514 | orchestrator | 2025-05-31 16:23:35 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:35.471848 | orchestrator | 2025-05-31 16:23:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:38.521331 | orchestrator | 2025-05-31 16:23:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:38.521411 | orchestrator | 2025-05-31 16:23:38 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:38.522234 | orchestrator | 2025-05-31 16:23:38 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:38.523380 | orchestrator | 2025-05-31 16:23:38 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:38.523418 | orchestrator | 2025-05-31 16:23:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:41.569717 | orchestrator | 2025-05-31 16:23:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:41.572209 | orchestrator | 2025-05-31 16:23:41 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:41.572241 | orchestrator | 2025-05-31 16:23:41 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:41.572811 | orchestrator | 2025-05-31 16:23:41 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:41.573085 | orchestrator | 2025-05-31 16:23:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:44.621735 | orchestrator | 2025-05-31 16:23:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:44.622876 | orchestrator | 2025-05-31 16:23:44 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:44.624365 | orchestrator | 2025-05-31 16:23:44 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:44.625501 | orchestrator | 2025-05-31 16:23:44 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:44.625607 | orchestrator | 2025-05-31 16:23:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:47.696624 | orchestrator | 2025-05-31 16:23:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:47.698386 | orchestrator | 2025-05-31 16:23:47 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:47.699620 | orchestrator | 2025-05-31 16:23:47 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:47.701040 | orchestrator | 2025-05-31 16:23:47 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:47.701071 | orchestrator | 2025-05-31 16:23:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:50.742988 | orchestrator | 2025-05-31 16:23:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:50.743038 | orchestrator | 2025-05-31 16:23:50 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:50.745030 | orchestrator | 2025-05-31 16:23:50 | INFO  | Task 
a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:50.745085 | orchestrator | 2025-05-31 16:23:50 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:50.745288 | orchestrator | 2025-05-31 16:23:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:53.793777 | orchestrator | 2025-05-31 16:23:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:53.794148 | orchestrator | 2025-05-31 16:23:53 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:53.794776 | orchestrator | 2025-05-31 16:23:53 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:53.795351 | orchestrator | 2025-05-31 16:23:53 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:53.795374 | orchestrator | 2025-05-31 16:23:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:56.836883 | orchestrator | 2025-05-31 16:23:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:56.838140 | orchestrator | 2025-05-31 16:23:56 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:56.840549 | orchestrator | 2025-05-31 16:23:56 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:56.841821 | orchestrator | 2025-05-31 16:23:56 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:56.841915 | orchestrator | 2025-05-31 16:23:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:23:59.899397 | orchestrator | 2025-05-31 16:23:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:23:59.900096 | orchestrator | 2025-05-31 16:23:59 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:23:59.901555 | orchestrator | 2025-05-31 16:23:59 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:23:59.902818 | orchestrator | 2025-05-31 16:23:59 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:23:59.902920 | orchestrator | 2025-05-31 16:23:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:02.946764 | orchestrator | 2025-05-31 16:24:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:02.947452 | orchestrator | 2025-05-31 16:24:02 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:02.948685 | orchestrator | 2025-05-31 16:24:02 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:02.950279 | orchestrator | 2025-05-31 16:24:02 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:02.950308 | orchestrator | 2025-05-31 16:24:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:06.002215 | orchestrator | 2025-05-31 16:24:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:06.009426 | orchestrator | 2025-05-31 16:24:06 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:06.009506 | orchestrator | 2025-05-31 16:24:06 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:06.012739 | orchestrator | 2025-05-31 16:24:06 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:06.012781 | orchestrator | 2025-05-31 16:24:06 | INFO  | Wait 1 
second(s) until the next check 2025-05-31 16:24:09.064758 | orchestrator | 2025-05-31 16:24:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:09.069722 | orchestrator | 2025-05-31 16:24:09 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:09.072520 | orchestrator | 2025-05-31 16:24:09 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:09.074716 | orchestrator | 2025-05-31 16:24:09 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:09.075138 | orchestrator | 2025-05-31 16:24:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:12.119701 | orchestrator | 2025-05-31 16:24:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:12.119791 | orchestrator | 2025-05-31 16:24:12 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:12.121969 | orchestrator | 2025-05-31 16:24:12 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:12.125480 | orchestrator | 2025-05-31 16:24:12 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:12.125593 | orchestrator | 2025-05-31 16:24:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:15.178137 | orchestrator | 2025-05-31 16:24:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:15.179414 | orchestrator | 2025-05-31 16:24:15 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:15.181185 | orchestrator | 2025-05-31 16:24:15 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:15.183230 | orchestrator | 2025-05-31 16:24:15 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:15.183268 | orchestrator | 2025-05-31 16:24:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:18.235181 | orchestrator | 2025-05-31 16:24:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:18.237311 | orchestrator | 2025-05-31 16:24:18 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:18.240452 | orchestrator | 2025-05-31 16:24:18 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:18.247127 | orchestrator | 2025-05-31 16:24:18 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:18.247201 | orchestrator | 2025-05-31 16:24:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:21.296013 | orchestrator | 2025-05-31 16:24:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:21.297194 | orchestrator | 2025-05-31 16:24:21 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:21.299684 | orchestrator | 2025-05-31 16:24:21 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:21.300844 | orchestrator | 2025-05-31 16:24:21 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:21.300869 | orchestrator | 2025-05-31 16:24:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:24.351635 | orchestrator | 2025-05-31 16:24:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:24.353986 | orchestrator | 2025-05-31 16:24:24 | INFO  | Task 
d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:24.357063 | orchestrator | 2025-05-31 16:24:24 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:24.359011 | orchestrator | 2025-05-31 16:24:24 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:24.359592 | orchestrator | 2025-05-31 16:24:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:27.401532 | orchestrator | 2025-05-31 16:24:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:27.402612 | orchestrator | 2025-05-31 16:24:27 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:27.404065 | orchestrator | 2025-05-31 16:24:27 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:27.405288 | orchestrator | 2025-05-31 16:24:27 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:27.405315 | orchestrator | 2025-05-31 16:24:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:30.458885 | orchestrator | 2025-05-31 16:24:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:30.463542 | orchestrator | 2025-05-31 16:24:30 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:30.466394 | orchestrator | 2025-05-31 16:24:30 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:30.469137 | orchestrator | 2025-05-31 16:24:30 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:30.469174 | orchestrator | 2025-05-31 16:24:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:33.518262 | orchestrator | 2025-05-31 16:24:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:33.518576 | orchestrator | 2025-05-31 16:24:33 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:33.519478 | orchestrator | 2025-05-31 16:24:33 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:33.520595 | orchestrator | 2025-05-31 16:24:33 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state STARTED 2025-05-31 16:24:33.520614 | orchestrator | 2025-05-31 16:24:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:36.567854 | orchestrator | 2025-05-31 16:24:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:36.568626 | orchestrator | 2025-05-31 16:24:36 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:36.569797 | orchestrator | 2025-05-31 16:24:36 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:36.571766 | orchestrator | 2025-05-31 16:24:36 | INFO  | Task 05ca3735-92b2-406d-89f5-f69e6385ae74 is in state SUCCESS 2025-05-31 16:24:36.572078 | orchestrator | 2025-05-31 16:24:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:36.573776 | orchestrator | 2025-05-31 16:24:36.573809 | orchestrator | None 2025-05-31 16:24:36.573820 | orchestrator | 2025-05-31 16:24:36.573832 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:24:36.573843 | orchestrator | 2025-05-31 16:24:36.573854 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:24:36.573865 | orchestrator | Saturday 31 
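The long run of "is in state STARTED" / "Wait 1 second(s) until the next check" lines above is the OSISM client polling the manager's task states once per second until every task has finished (two of the tasks reach SUCCESS at 16:23:04 and 16:24:36). A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper that returns states like "STARTED", "SUCCESS" or "FAILURE" (the real client talks to the Celery-based manager API, which is not shown in this log):

import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll task states until every task has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so we may discard while looping
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)

The fixed one-second interval mirrors the log output; a production client might back off instead of polling at a constant rate.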
May 2025 16:22:34 +0000 (0:00:00.354) 0:00:00.354 ********** 2025-05-31 16:24:36.573875 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:24:36.573952 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:24:36.573965 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:24:36.573976 | orchestrator | 2025-05-31 16:24:36.573987 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:24:36.573997 | orchestrator | Saturday 31 May 2025 16:22:35 +0000 (0:00:00.397) 0:00:00.752 ********** 2025-05-31 16:24:36.574085 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-31 16:24:36.574114 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-31 16:24:36.574126 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-31 16:24:36.574137 | orchestrator | 2025-05-31 16:24:36.574147 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-31 16:24:36.574159 | orchestrator | 2025-05-31 16:24:36.574170 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-31 16:24:36.574181 | orchestrator | Saturday 31 May 2025 16:22:35 +0000 (0:00:00.304) 0:00:01.056 ********** 2025-05-31 16:24:36.574192 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:24:36.574203 | orchestrator | 2025-05-31 16:24:36.574319 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-31 16:24:36.574334 | orchestrator | Saturday 31 May 2025 16:22:36 +0000 (0:00:00.703) 0:00:01.760 ********** 2025-05-31 16:24:36.574345 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 16:24:36.574356 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 16:24:36.574367 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 16:24:36.574378 | orchestrator | 2025-05-31 16:24:36.574391 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-31 16:24:36.574403 | orchestrator | Saturday 31 May 2025 16:22:36 +0000 (0:00:00.773) 0:00:02.533 ********** 2025-05-31 16:24:36.574419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.574441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': 
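The "Setting sysctl values" task sets vm.max_map_count to 262144 on each node, the kernel limit OpenSearch needs for its memory-mapped index files. A rough stand-alone equivalent of that one setting, assuming root privileges and an illustrative drop-in file name (the role itself applies it through Ansible, not through this script):

from pathlib import Path

def set_max_map_count(value=262144):
    """Apply vm.max_map_count for the running kernel and persist it (requires root)."""
    # Runtime value, equivalent to `sysctl -w vm.max_map_count=262144`.
    Path("/proc/sys/vm/max_map_count").write_text(f"{value}\n")
    # Persist across reboots via a sysctl drop-in file (file name is an assumption).
    Path("/etc/sysctl.d/99-opensearch.conf").write_text(f"vm.max_map_count = {value}\n")

if __name__ == "__main__":
    set_max_map_count()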
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.574498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.574515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.574531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.574551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.574571 | orchestrator | 2025-05-31 16:24:36.574584 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-31 16:24:36.574596 | orchestrator | Saturday 31 May 2025 16:22:38 +0000 (0:00:01.656) 0:00:04.190 ********** 2025-05-31 16:24:36.574608 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:24:36.574620 | orchestrator | 2025-05-31 16:24:36.574632 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-31 16:24:36.574644 | orchestrator | Saturday 31 May 2025 16:22:39 +0000 (0:00:00.713) 0:00:04.903 ********** 2025-05-31 16:24:36.574666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.574681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.574694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.574728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.574759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.574772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.574784 | orchestrator | 2025-05-31 16:24:36.574795 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-31 16:24:36.574806 | orchestrator | Saturday 31 May 2025 16:22:42 +0000 (0:00:03.381) 0:00:08.284 ********** 2025-05-31 16:24:36.574818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:24:36.574840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:24:36.574853 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:24:36.574871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
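Each container definition in the items above carries a healthcheck of the form "healthcheck_curl http://<node-ip>:<port>", meaning the container counts as healthy once its API port answers HTTP. A minimal stand-in for such a probe, assuming plain HTTP without authentication (healthcheck_curl itself is a kolla helper script and is not reproduced here):

import sys
import urllib.request

def probe(url, timeout=30):
    """Return True if the endpoint answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False

if __name__ == "__main__":
    # Example: the OpenSearch healthcheck on the first node, as configured above.
    sys.exit(0 if probe("http://192.168.16.10:9200") else 1)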
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:24:36.574883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:24:36.574895 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:24:36.574906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:24:36.574964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:24:36.574976 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:24:36.574987 | orchestrator | 2025-05-31 16:24:36.574998 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-31 16:24:36.575009 | orchestrator | Saturday 31 May 2025 16:22:43 +0000 (0:00:00.641) 0:00:08.925 ********** 2025-05-31 16:24:36.575026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:24:36.575039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:24:36.575050 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:24:36.575062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:24:36.575120 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 16:24:36.575141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:24:36.575153 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:24:36.575165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 16:24:36.575177 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:24:36.575187 | orchestrator | 2025-05-31 16:24:36.575198 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-31 16:24:36.575228 | orchestrator | Saturday 31 May 2025 16:22:44 +0000 (0:00:00.979) 0:00:09.904 ********** 2025-05-31 16:24:36.575240 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.575271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.575283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.575315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.575329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.575354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.575366 | orchestrator | 2025-05-31 16:24:36.575377 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-31 16:24:36.575387 | orchestrator | Saturday 31 May 2025 16:22:46 +0000 (0:00:02.461) 0:00:12.365 ********** 2025-05-31 16:24:36.575398 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:24:36.575409 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:24:36.575420 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:24:36.575430 | orchestrator | 2025-05-31 16:24:36.575441 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-31 16:24:36.575452 | orchestrator | Saturday 31 May 2025 16:22:50 +0000 (0:00:03.604) 0:00:15.970 ********** 2025-05-31 16:24:36.575462 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:24:36.575473 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:24:36.575484 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:24:36.575494 | orchestrator | 2025-05-31 16:24:36.575505 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-31 16:24:36.575515 | orchestrator | Saturday 31 May 
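The "Copying over config.json files for services" task drops a small JSON file per container that tells the kolla start script which configuration files to copy into place and how to launch the service. A sketch of the general shape of such a file for the opensearch container, with the command line and paths given only as illustrative assumptions (the real content is rendered from the role's templates):

import json

# Illustrative only: command and paths are assumptions, not taken from this log.
opensearch_config_json = {
    "command": "/usr/share/opensearch/bin/opensearch",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/opensearch.yml",
            "dest": "/usr/share/opensearch/config/opensearch.yml",
            "owner": "opensearch",
            "perm": "0600",
        }
    ],
}

print(json.dumps(opensearch_config_json, indent=2))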
2025 16:22:52 +0000 (0:00:01.667) 0:00:17.638 ********** 2025-05-31 16:24:36.575656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.575672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.575691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 16:24:36.575728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.575749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.575762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 16:24:36.575781 | orchestrator | 2025-05-31 16:24:36.575792 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-31 16:24:36.575803 | orchestrator | Saturday 31 May 2025 16:22:54 +0000 (0:00:02.437) 0:00:20.075 ********** 2025-05-31 16:24:36.575814 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:24:36.575825 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:24:36.575835 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:24:36.575846 | orchestrator | 2025-05-31 16:24:36.575856 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-31 16:24:36.575866 | orchestrator | Saturday 31 May 2025 16:22:54 +0000 (0:00:00.390) 0:00:20.466 ********** 2025-05-31 16:24:36.575877 | orchestrator | 2025-05-31 16:24:36.575888 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-31 16:24:36.575898 | orchestrator | Saturday 31 May 2025 16:22:55 +0000 (0:00:00.147) 0:00:20.613 ********** 2025-05-31 
16:24:36.575909 | orchestrator | 2025-05-31 16:24:36.575940 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-31 16:24:36.575950 | orchestrator | Saturday 31 May 2025 16:22:55 +0000 (0:00:00.054) 0:00:20.668 ********** 2025-05-31 16:24:36.575961 | orchestrator | 2025-05-31 16:24:36.575972 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-31 16:24:36.575982 | orchestrator | Saturday 31 May 2025 16:22:55 +0000 (0:00:00.056) 0:00:20.724 ********** 2025-05-31 16:24:36.575993 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:24:36.576003 | orchestrator | 2025-05-31 16:24:36.576014 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-31 16:24:36.576024 | orchestrator | Saturday 31 May 2025 16:22:55 +0000 (0:00:00.238) 0:00:20.963 ********** 2025-05-31 16:24:36.576035 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:24:36.576045 | orchestrator | 2025-05-31 16:24:36.576056 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-31 16:24:36.576066 | orchestrator | Saturday 31 May 2025 16:22:56 +0000 (0:00:00.710) 0:00:21.674 ********** 2025-05-31 16:24:36.576077 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:24:36.576087 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:24:36.576097 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:24:36.576108 | orchestrator | 2025-05-31 16:24:36.576118 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-31 16:24:36.576129 | orchestrator | Saturday 31 May 2025 16:23:25 +0000 (0:00:29.648) 0:00:51.322 ********** 2025-05-31 16:24:36.576144 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:24:36.576155 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:24:36.576166 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:24:36.576176 | orchestrator | 2025-05-31 16:24:36.576187 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-31 16:24:36.576198 | orchestrator | Saturday 31 May 2025 16:24:21 +0000 (0:00:55.653) 0:01:46.976 ********** 2025-05-31 16:24:36.576208 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:24:36.576219 | orchestrator | 2025-05-31 16:24:36.576229 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-31 16:24:36.576240 | orchestrator | Saturday 31 May 2025 16:24:22 +0000 (0:00:00.644) 0:01:47.620 ********** 2025-05-31 16:24:36.576251 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:24:36.576261 | orchestrator | 2025-05-31 16:24:36.576278 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-31 16:24:36.576291 | orchestrator | Saturday 31 May 2025 16:24:24 +0000 (0:00:02.853) 0:01:50.473 ********** 2025-05-31 16:24:36.576302 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:24:36.576314 | orchestrator | 2025-05-31 16:24:36.576325 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-31 16:24:36.576337 | orchestrator | Saturday 31 May 2025 16:24:27 +0000 (0:00:02.628) 0:01:53.101 ********** 2025-05-31 16:24:36.576349 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:24:36.576361 | orchestrator | 2025-05-31 16:24:36.576372 | 
orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-31 16:24:36.576384 | orchestrator | Saturday 31 May 2025 16:24:30 +0000 (0:00:03.049) 0:01:56.151 ********** 2025-05-31 16:24:36.576396 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:24:36.576408 | orchestrator | 2025-05-31 16:24:36.576425 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:24:36.576439 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 16:24:36.576453 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:24:36.576465 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 16:24:36.576477 | orchestrator | 2025-05-31 16:24:36.576489 | orchestrator | 2025-05-31 16:24:36.576501 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:24:36.576513 | orchestrator | Saturday 31 May 2025 16:24:33 +0000 (0:00:03.231) 0:01:59.383 ********** 2025-05-31 16:24:36.576525 | orchestrator | =============================================================================== 2025-05-31 16:24:36.576537 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 55.65s 2025-05-31 16:24:36.576548 | orchestrator | opensearch : Restart opensearch container ------------------------------ 29.65s 2025-05-31 16:24:36.576559 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.60s 2025-05-31 16:24:36.576572 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.38s 2025-05-31 16:24:36.576583 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.23s 2025-05-31 16:24:36.576595 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.05s 2025-05-31 16:24:36.576607 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.85s 2025-05-31 16:24:36.576618 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.63s 2025-05-31 16:24:36.576629 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.46s 2025-05-31 16:24:36.576639 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.44s 2025-05-31 16:24:36.576649 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.67s 2025-05-31 16:24:36.576660 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.66s 2025-05-31 16:24:36.576670 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.98s 2025-05-31 16:24:36.576681 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.77s 2025-05-31 16:24:36.576691 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-05-31 16:24:36.576701 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.71s 2025-05-31 16:24:36.576712 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-05-31 16:24:36.576722 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2025-05-31 
16:24:36.576733 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.64s 2025-05-31 16:24:36.576750 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-05-31 16:24:39.616765 | orchestrator | 2025-05-31 16:24:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:39.617600 | orchestrator | 2025-05-31 16:24:39 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:39.622265 | orchestrator | 2025-05-31 16:24:39 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:39.622295 | orchestrator | 2025-05-31 16:24:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:42.673836 | orchestrator | 2025-05-31 16:24:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:42.676614 | orchestrator | 2025-05-31 16:24:42 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:42.678373 | orchestrator | 2025-05-31 16:24:42 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:42.678682 | orchestrator | 2025-05-31 16:24:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:45.730441 | orchestrator | 2025-05-31 16:24:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:45.738133 | orchestrator | 2025-05-31 16:24:45 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:45.739627 | orchestrator | 2025-05-31 16:24:45 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:45.739651 | orchestrator | 2025-05-31 16:24:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:48.787717 | orchestrator | 2025-05-31 16:24:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:48.790261 | orchestrator | 2025-05-31 16:24:48 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:48.792335 | orchestrator | 2025-05-31 16:24:48 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:48.792365 | orchestrator | 2025-05-31 16:24:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:51.844841 | orchestrator | 2025-05-31 16:24:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:51.846126 | orchestrator | 2025-05-31 16:24:51 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:51.847659 | orchestrator | 2025-05-31 16:24:51 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:51.847692 | orchestrator | 2025-05-31 16:24:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:54.893175 | orchestrator | 2025-05-31 16:24:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:54.893581 | orchestrator | 2025-05-31 16:24:54 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:54.895219 | orchestrator | 2025-05-31 16:24:54 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:54.895636 | orchestrator | 2025-05-31 16:24:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:24:57.945407 | orchestrator | 2025-05-31 16:24:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:24:57.946138 | orchestrator | 
2025-05-31 16:24:57 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:24:57.947256 | orchestrator | 2025-05-31 16:24:57 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:24:57.947306 | orchestrator | 2025-05-31 16:24:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:00.998875 | orchestrator | 2025-05-31 16:25:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:00.999640 | orchestrator | 2025-05-31 16:25:00 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:01.000082 | orchestrator | 2025-05-31 16:25:00 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:01.000112 | orchestrator | 2025-05-31 16:25:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:04.057069 | orchestrator | 2025-05-31 16:25:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:04.058888 | orchestrator | 2025-05-31 16:25:04 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:04.060820 | orchestrator | 2025-05-31 16:25:04 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:04.060853 | orchestrator | 2025-05-31 16:25:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:07.105709 | orchestrator | 2025-05-31 16:25:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:07.107206 | orchestrator | 2025-05-31 16:25:07 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:07.108647 | orchestrator | 2025-05-31 16:25:07 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:07.108680 | orchestrator | 2025-05-31 16:25:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:10.164077 | orchestrator | 2025-05-31 16:25:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:10.166387 | orchestrator | 2025-05-31 16:25:10 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:10.168683 | orchestrator | 2025-05-31 16:25:10 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:10.168729 | orchestrator | 2025-05-31 16:25:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:13.217932 | orchestrator | 2025-05-31 16:25:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:13.219345 | orchestrator | 2025-05-31 16:25:13 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:13.221955 | orchestrator | 2025-05-31 16:25:13 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:13.221977 | orchestrator | 2025-05-31 16:25:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:16.276080 | orchestrator | 2025-05-31 16:25:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:16.277377 | orchestrator | 2025-05-31 16:25:16 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:16.278798 | orchestrator | 2025-05-31 16:25:16 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:16.278827 | orchestrator | 2025-05-31 16:25:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:19.329118 | orchestrator | 2025-05-31 16:25:19 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:19.330294 | orchestrator | 2025-05-31 16:25:19 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:19.331964 | orchestrator | 2025-05-31 16:25:19 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:19.331994 | orchestrator | 2025-05-31 16:25:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:22.386130 | orchestrator | 2025-05-31 16:25:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:22.387449 | orchestrator | 2025-05-31 16:25:22 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:22.388671 | orchestrator | 2025-05-31 16:25:22 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:22.388858 | orchestrator | 2025-05-31 16:25:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:25.432842 | orchestrator | 2025-05-31 16:25:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:25.434262 | orchestrator | 2025-05-31 16:25:25 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:25.437376 | orchestrator | 2025-05-31 16:25:25 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:25.437394 | orchestrator | 2025-05-31 16:25:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:28.481838 | orchestrator | 2025-05-31 16:25:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:28.482988 | orchestrator | 2025-05-31 16:25:28 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:28.485163 | orchestrator | 2025-05-31 16:25:28 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:28.485204 | orchestrator | 2025-05-31 16:25:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:31.529939 | orchestrator | 2025-05-31 16:25:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:31.530591 | orchestrator | 2025-05-31 16:25:31 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:31.531508 | orchestrator | 2025-05-31 16:25:31 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:31.531534 | orchestrator | 2025-05-31 16:25:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:34.574217 | orchestrator | 2025-05-31 16:25:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:34.575105 | orchestrator | 2025-05-31 16:25:34 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:34.576432 | orchestrator | 2025-05-31 16:25:34 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:34.576457 | orchestrator | 2025-05-31 16:25:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:37.626363 | orchestrator | 2025-05-31 16:25:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:37.626543 | orchestrator | 2025-05-31 16:25:37 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:37.627792 | orchestrator | 2025-05-31 16:25:37 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:37.627963 | orchestrator | 2025-05-31 16:25:37 | INFO  | Wait 1 second(s) until the next 
check 2025-05-31 16:25:40.683279 | orchestrator | 2025-05-31 16:25:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:40.687102 | orchestrator | 2025-05-31 16:25:40 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state STARTED 2025-05-31 16:25:40.689103 | orchestrator | 2025-05-31 16:25:40 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:40.689216 | orchestrator | 2025-05-31 16:25:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:43.724844 | orchestrator | 2025-05-31 16:25:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:43.729237 | orchestrator | 2025-05-31 16:25:43 | INFO  | Task d840efa2-28d1-47f0-92c2-c28d489f1135 is in state SUCCESS 2025-05-31 16:25:43.729653 | orchestrator | 2025-05-31 16:25:43.732580 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-31 16:25:43.732659 | orchestrator | 2025-05-31 16:25:43.732753 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-31 16:25:43.732765 | orchestrator | 2025-05-31 16:25:43.732776 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-31 16:25:43.732787 | orchestrator | Saturday 31 May 2025 16:13:36 +0000 (0:00:01.601) 0:00:01.601 ********** 2025-05-31 16:25:43.732798 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.732811 | orchestrator | 2025-05-31 16:25:43.732821 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-31 16:25:43.732832 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:01.218) 0:00:02.819 ********** 2025-05-31 16:25:43.732843 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.732854 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-31 16:25:43.732865 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-31 16:25:43.732875 | orchestrator | 2025-05-31 16:25:43.732917 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-31 16:25:43.732928 | orchestrator | Saturday 31 May 2025 16:13:37 +0000 (0:00:00.478) 0:00:03.298 ********** 2025-05-31 16:25:43.732941 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.732952 | orchestrator | 2025-05-31 16:25:43.732962 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-31 16:25:43.732973 | orchestrator | Saturday 31 May 2025 16:13:38 +0000 (0:00:01.161) 0:00:04.459 ********** 2025-05-31 16:25:43.732984 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.732994 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733005 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733015 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733032 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733043 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733053 | orchestrator | 2025-05-31 16:25:43.733064 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-31 
16:25:43.733074 | orchestrator | Saturday 31 May 2025 16:13:40 +0000 (0:00:01.454) 0:00:05.914 ********** 2025-05-31 16:25:43.733085 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733096 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733106 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733116 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.733127 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733137 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733148 | orchestrator | 2025-05-31 16:25:43.733158 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-31 16:25:43.733169 | orchestrator | Saturday 31 May 2025 16:13:41 +0000 (0:00:00.982) 0:00:06.897 ********** 2025-05-31 16:25:43.733185 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733203 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733223 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733234 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.733245 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733255 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733270 | orchestrator | 2025-05-31 16:25:43.733289 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-31 16:25:43.733308 | orchestrator | Saturday 31 May 2025 16:13:42 +0000 (0:00:01.170) 0:00:08.068 ********** 2025-05-31 16:25:43.733325 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733345 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733364 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733405 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.733424 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733443 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733462 | orchestrator | 2025-05-31 16:25:43.733482 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-31 16:25:43.733495 | orchestrator | Saturday 31 May 2025 16:13:43 +0000 (0:00:00.813) 0:00:08.881 ********** 2025-05-31 16:25:43.733506 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733516 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733526 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733537 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.733547 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733558 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733568 | orchestrator | 2025-05-31 16:25:43.733595 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-31 16:25:43.733606 | orchestrator | Saturday 31 May 2025 16:13:43 +0000 (0:00:00.610) 0:00:09.492 ********** 2025-05-31 16:25:43.733616 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733626 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733637 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733647 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.733658 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733668 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733679 | orchestrator | 2025-05-31 16:25:43.733689 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-31 16:25:43.733700 | orchestrator | Saturday 31 May 2025 16:13:44 +0000 (0:00:00.949) 0:00:10.441 ********** 2025-05-31 16:25:43.733711 | orchestrator | skipping: 
[testbed-node-0] 2025-05-31 16:25:43.733728 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.733747 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.733765 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.733785 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.733804 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.733815 | orchestrator | 2025-05-31 16:25:43.733826 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-31 16:25:43.733837 | orchestrator | Saturday 31 May 2025 16:13:45 +0000 (0:00:00.621) 0:00:11.063 ********** 2025-05-31 16:25:43.733848 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.733858 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.733869 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.733905 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.733918 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.733928 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.733939 | orchestrator | 2025-05-31 16:25:43.733965 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-31 16:25:43.733976 | orchestrator | Saturday 31 May 2025 16:13:46 +0000 (0:00:00.822) 0:00:11.886 ********** 2025-05-31 16:25:43.733987 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.733998 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:25:43.734008 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:25:43.734077 | orchestrator | 2025-05-31 16:25:43.734091 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-31 16:25:43.734102 | orchestrator | Saturday 31 May 2025 16:13:47 +0000 (0:00:00.805) 0:00:12.691 ********** 2025-05-31 16:25:43.734112 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.734123 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.734133 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.734144 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.734166 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.734177 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.734188 | orchestrator | 2025-05-31 16:25:43.734203 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-31 16:25:43.734223 | orchestrator | Saturday 31 May 2025 16:13:48 +0000 (0:00:01.282) 0:00:13.974 ********** 2025-05-31 16:25:43.734259 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.734280 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:25:43.734300 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:25:43.734319 | orchestrator | 2025-05-31 16:25:43.734340 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-31 16:25:43.734359 | orchestrator | Saturday 31 May 2025 16:13:51 +0000 (0:00:03.069) 0:00:17.043 ********** 2025-05-31 16:25:43.734379 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.734400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.734422 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-2)  2025-05-31 16:25:43.734442 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.734463 | orchestrator | 2025-05-31 16:25:43.734483 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-31 16:25:43.734505 | orchestrator | Saturday 31 May 2025 16:13:51 +0000 (0:00:00.408) 0:00:17.452 ********** 2025-05-31 16:25:43.734528 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.734543 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.734557 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.734576 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.734595 | orchestrator | 2025-05-31 16:25:43.734613 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-31 16:25:43.734631 | orchestrator | Saturday 31 May 2025 16:13:52 +0000 (0:00:00.585) 0:00:18.038 ********** 2025-05-31 16:25:43.734660 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.734747 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.734770 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.734789 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.734809 | orchestrator | 2025-05-31 16:25:43.734828 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-31 16:25:43.734859 | orchestrator | Saturday 31 May 2025 16:13:52 +0000 (0:00:00.160) 0:00:18.198 ********** 2025-05-31 16:25:43.734931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-31 16:13:49.162270', 'end': 
'2025-05-31 16:13:49.437818', 'delta': '0:00:00.275548', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.735003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-31 16:13:50.043682', 'end': '2025-05-31 16:13:50.309738', 'delta': '0:00:00.266056', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.735026 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-31 16:13:51.164991', 'end': '2025-05-31 16:13:51.392734', 'delta': '0:00:00.227743', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-31 16:25:43.735047 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.735125 | orchestrator | 2025-05-31 16:25:43.735195 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-31 16:25:43.735209 | orchestrator | Saturday 31 May 2025 16:13:52 +0000 (0:00:00.190) 0:00:18.389 ********** 2025-05-31 16:25:43.735288 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.735299 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.735310 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.735321 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.735331 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.735342 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.735352 | orchestrator | 2025-05-31 16:25:43.735363 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-31 16:25:43.735374 | orchestrator | Saturday 31 May 2025 16:13:54 +0000 (0:00:01.368) 0:00:19.758 ********** 2025-05-31 16:25:43.735384 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.735395 | orchestrator | 2025-05-31 16:25:43.735406 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-31 16:25:43.735418 | orchestrator | Saturday 31 May 2025 16:13:54 +0000 (0:00:00.693) 0:00:20.452 ********** 2025-05-31 16:25:43.735436 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.735455 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.735483 | orchestrator | skipping: 
[testbed-node-2] 2025-05-31 16:25:43.735502 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.735521 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.735538 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.735557 | orchestrator | 2025-05-31 16:25:43.735577 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-31 16:25:43.735596 | orchestrator | Saturday 31 May 2025 16:13:55 +0000 (0:00:00.691) 0:00:21.143 ********** 2025-05-31 16:25:43.735611 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.735637 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.735655 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.735673 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.735693 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.735711 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.735730 | orchestrator | 2025-05-31 16:25:43.735750 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-31 16:25:43.735762 | orchestrator | Saturday 31 May 2025 16:13:56 +0000 (0:00:00.890) 0:00:22.034 ********** 2025-05-31 16:25:43.735776 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.735794 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.735813 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.735832 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.735851 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.735870 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.735924 | orchestrator | 2025-05-31 16:25:43.735945 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-31 16:25:43.735963 | orchestrator | Saturday 31 May 2025 16:13:57 +0000 (0:00:00.676) 0:00:22.711 ********** 2025-05-31 16:25:43.735995 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736006 | orchestrator | 2025-05-31 16:25:43.736017 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-31 16:25:43.736028 | orchestrator | Saturday 31 May 2025 16:13:57 +0000 (0:00:00.112) 0:00:22.823 ********** 2025-05-31 16:25:43.736039 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736049 | orchestrator | 2025-05-31 16:25:43.736060 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-31 16:25:43.736070 | orchestrator | Saturday 31 May 2025 16:13:57 +0000 (0:00:00.505) 0:00:23.329 ********** 2025-05-31 16:25:43.736081 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736092 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736102 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.736113 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736123 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736134 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736144 | orchestrator | 2025-05-31 16:25:43.736155 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-31 16:25:43.736166 | orchestrator | Saturday 31 May 2025 16:13:58 +0000 (0:00:00.431) 0:00:23.761 ********** 2025-05-31 16:25:43.736176 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736187 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736197 | orchestrator | 
skipping: [testbed-node-2] 2025-05-31 16:25:43.736208 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736219 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736229 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736240 | orchestrator | 2025-05-31 16:25:43.736250 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-31 16:25:43.736261 | orchestrator | Saturday 31 May 2025 16:13:58 +0000 (0:00:00.718) 0:00:24.480 ********** 2025-05-31 16:25:43.736272 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736282 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736292 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.736303 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736314 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736324 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736334 | orchestrator | 2025-05-31 16:25:43.736345 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-31 16:25:43.736356 | orchestrator | Saturday 31 May 2025 16:13:59 +0000 (0:00:00.846) 0:00:25.326 ********** 2025-05-31 16:25:43.736367 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736377 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736388 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.736398 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736438 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736457 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736474 | orchestrator | 2025-05-31 16:25:43.736485 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-31 16:25:43.736496 | orchestrator | Saturday 31 May 2025 16:14:00 +0000 (0:00:00.708) 0:00:26.034 ********** 2025-05-31 16:25:43.736510 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736528 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736547 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.736566 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736584 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736600 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736611 | orchestrator | 2025-05-31 16:25:43.736622 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-31 16:25:43.736632 | orchestrator | Saturday 31 May 2025 16:14:01 +0000 (0:00:00.551) 0:00:26.586 ********** 2025-05-31 16:25:43.736643 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736653 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736664 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.736674 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736684 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736695 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736705 | orchestrator | 2025-05-31 16:25:43.736716 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-31 16:25:43.736727 | orchestrator | Saturday 31 May 2025 16:14:01 +0000 (0:00:00.673) 0:00:27.259 ********** 2025-05-31 16:25:43.736737 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.736747 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.736758 | 
orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.736768 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.736779 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.736789 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.736800 | orchestrator | 2025-05-31 16:25:43.736817 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-31 16:25:43.736828 | orchestrator | Saturday 31 May 2025 16:14:02 +0000 (0:00:00.598) 0:00:27.857 ********** 2025-05-31 16:25:43.736839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.736999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737125 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891', 'scsi-SQEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce505b94-3a05-45a8-8440-80c284efa891-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 
'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737213 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.737234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737354 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.737374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539', 'scsi-SQEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part1', 'scsi-SQEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part14', 'scsi-SQEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part15', 'scsi-SQEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part16', 'scsi-SQEMU_QEMU_HARDDISK_e1159f44-add9-48c4-ad36-81abc037f539-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e43a14fa--64bd--59a3--8350--23173f11027f-osd--block--e43a14fa--64bd--59a3--8350--23173f11027f', 'dm-uuid-LVM-KjcMReimo5PsGyzXZpJsMXXL4dS0YiPoTecRVN1Fy57MdjoKWR3FvNsX0MOcMulr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92adfeec--5c5c--5208--b88e--9a01a071247e-osd--block--92adfeec--5c5c--5208--b88e--9a01a071247e', 'dm-uuid-LVM-5L4hRZnla14bwaKnoVbRc8bkaoz3f9wwe6EmGBv2eh7By4XPR3zw4G1eX0Emizbu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737624 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.737654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-05-31 16:25:43.737694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e43a14fa--64bd--59a3--8350--23173f11027f-osd--block--e43a14fa--64bd--59a3--8350--23173f11027f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JQswj1-y2Ph-Xuu3-deb4-fEd9-oYwM-slHKsQ', 'scsi-0QEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae', 'scsi-SQEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad7aff40--0fc1--546d--9ec3--a4c69926416d-osd--block--ad7aff40--0fc1--546d--9ec3--a4c69926416d', 'dm-uuid-LVM-gSL583eMFM2i8rm1dadR9TUAB3PZW3cKY6xXChbVmHt2kudAn3jbfuE7DPOsuThG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--92adfeec--5c5c--5208--b88e--9a01a071247e-osd--block--92adfeec--5c5c--5208--b88e--9a01a071247e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PSWQNF-Gqqg-V0D3-TdNi-9gjD-0Aeq-QHIN7Y', 'scsi-0QEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72', 'scsi-SQEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02409adc--b936--5a4c--b212--7809fa63c72a-osd--block--02409adc--b936--5a4c--b212--7809fa63c72a', 'dm-uuid-LVM-GGIt5bSU2nDGA03xHpuGJYokBEYl6P8PZMjzNu1e81SWhC4M3JehoGEPbmfjWgoM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da', 'scsi-SQEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.737938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737952 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.737963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.737996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--ad7aff40--0fc1--546d--9ec3--a4c69926416d-osd--block--ad7aff40--0fc1--546d--9ec3--a4c69926416d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JVV0Tk-kRsG-tLPh-LFPx-aKlE-9rbR-GOFlU8', 'scsi-0QEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81', 'scsi-SQEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--02409adc--b936--5a4c--b212--7809fa63c72a-osd--block--02409adc--b936--5a4c--b212--7809fa63c72a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AqzP4V-1dni-XqoZ-OxyJ-fowG-XCMJ-LJnZWK', 'scsi-0QEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5', 'scsi-SQEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa', 'scsi-SQEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738210 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.738233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a818804--e2a7--5d8b--beae--a4acf44277a5-osd--block--6a818804--e2a7--5d8b--beae--a4acf44277a5', 'dm-uuid-LVM-0jf03cHtqG7zvC3qv0JSCL53Z5bdp1zXF3YoZvYkZLEFb7aLtJQklGGjm6Z8Trxc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738253 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b45f5b5--5599--560e--b955--f5f9e148b85f-osd--block--8b45f5b5--5599--560e--b955--f5f9e148b85f', 'dm-uuid-LVM-75ZA7Zi1Js7x7tZot6FcZ90qIF7l3KM08jpzcoMbfQPbddcGSUZ7j5KLc7a3a4mj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:25:43.738426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6a818804--e2a7--5d8b--beae--a4acf44277a5-osd--block--6a818804--e2a7--5d8b--beae--a4acf44277a5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-igRBdo-6hUg-NeJy-a5BC-R1WQ-ghaE-dsu4Fq', 'scsi-0QEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6', 'scsi-SQEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8b45f5b5--5599--560e--b955--f5f9e148b85f-osd--block--8b45f5b5--5599--560e--b955--f5f9e148b85f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cBBI9B-Yzab-mAyO-KuMO-MtUG-4AUI-cM0kWu', 'scsi-0QEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604', 'scsi-SQEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe', 'scsi-SQEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:25:43.738525 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.738545 | orchestrator | 2025-05-31 16:25:43.738564 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-31 16:25:43.738583 | orchestrator | Saturday 31 May 2025 16:14:04 +0000 (0:00:01.803) 0:00:29.661 ********** 2025-05-31 16:25:43.738601 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.738613 | orchestrator | 2025-05-31 16:25:43.738623 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-31 16:25:43.738634 | orchestrator | Saturday 31 May 2025 16:14:04 +0000 (0:00:00.282) 0:00:29.944 ********** 2025-05-31 16:25:43.738644 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.738655 | orchestrator | 2025-05-31 16:25:43.738666 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-05-31 16:25:43.738676 | orchestrator | Saturday 31 May 2025 16:14:04 +0000 (0:00:00.168) 0:00:30.112 ********** 2025-05-31 16:25:43.738687 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.738697 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.738708 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.738718 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.738729 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.738739 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.738750 | orchestrator | 2025-05-31 16:25:43.738761 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-31 16:25:43.738771 | orchestrator | Saturday 31 May 2025 16:14:05 +0000 (0:00:00.795) 0:00:30.908 ********** 2025-05-31 16:25:43.738782 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.738793 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.738803 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.738876 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.739000 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.739034 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.739054 | orchestrator | 2025-05-31 16:25:43.739135 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-31 16:25:43.739149 | orchestrator | Saturday 31 May 2025 16:14:06 +0000 (0:00:01.272) 0:00:32.181 ********** 2025-05-31 16:25:43.739159 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.739171 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.739190 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.739209 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.739228 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.739246 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.739291 | orchestrator | 2025-05-31 16:25:43.739303 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-31 16:25:43.739358 | orchestrator | Saturday 31 May 2025 16:14:07 +0000 (0:00:00.776) 0:00:32.958 ********** 2025-05-31 16:25:43.739369 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.739381 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.739400 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.739419 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.739436 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.739454 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.739472 | orchestrator | 2025-05-31 16:25:43.739488 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-31 16:25:43.739505 | orchestrator | Saturday 31 May 2025 16:14:08 +0000 (0:00:01.015) 0:00:33.973 ********** 2025-05-31 16:25:43.739520 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.739535 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.739545 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.739555 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.739570 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.739580 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.739589 | orchestrator | 2025-05-31 16:25:43.739599 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-31 16:25:43.739608 
| orchestrator | Saturday 31 May 2025 16:14:09 +0000 (0:00:00.725) 0:00:34.698 ********** 2025-05-31 16:25:43.739617 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.739627 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.739636 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.739645 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.739654 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.739663 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.739673 | orchestrator | 2025-05-31 16:25:43.739682 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-31 16:25:43.739692 | orchestrator | Saturday 31 May 2025 16:14:10 +0000 (0:00:01.248) 0:00:35.947 ********** 2025-05-31 16:25:43.739701 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.739710 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.739719 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.739729 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.739738 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.739747 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.739756 | orchestrator | 2025-05-31 16:25:43.739766 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-31 16:25:43.739775 | orchestrator | Saturday 31 May 2025 16:14:11 +0000 (0:00:00.968) 0:00:36.916 ********** 2025-05-31 16:25:43.739785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.739803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-31 16:25:43.739813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.739823 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-31 16:25:43.739832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:25:43.739851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.739861 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.739871 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-31 16:25:43.739904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-31 16:25:43.739915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:25:43.739924 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-31 16:25:43.739933 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.739943 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:25:43.739952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:25:43.739961 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.739971 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:25:43.739980 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-31 16:25:43.739989 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.739999 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:25:43.740008 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:25:43.740017 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.740027 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:25:43.740037 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:25:43.740046 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.740058 | orchestrator | 2025-05-31 16:25:43.740074 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-31 16:25:43.740091 | orchestrator | Saturday 31 May 2025 16:14:14 +0000 (0:00:03.532) 0:00:40.448 ********** 2025-05-31 16:25:43.740109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.740125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.740141 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-31 16:25:43.740151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-31 16:25:43.740160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.740169 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-31 16:25:43.740179 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.740188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:25:43.740197 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-31 16:25:43.740206 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-31 16:25:43.740215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:25:43.740224 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.740234 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-31 16:25:43.740243 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.740252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:25:43.740262 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.740271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:25:43.740280 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:25:43.740289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:25:43.740299 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:25:43.740308 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:25:43.740317 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.740326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:25:43.740335 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.740345 | orchestrator | 2025-05-31 16:25:43.740354 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-31 16:25:43.740371 | orchestrator | Saturday 31 May 2025 16:14:16 +0000 (0:00:01.704) 0:00:42.153 ********** 2025-05-31 16:25:43.740389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.740406 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-31 16:25:43.740418 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-31 16:25:43.740427 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-31 16:25:43.740437 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-31 16:25:43.740446 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-31 16:25:43.740463 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-31 16:25:43.740479 | 
orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-31 16:25:43.740496 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-31 16:25:43.740512 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-31 16:25:43.740529 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-31 16:25:43.740545 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-31 16:25:43.740562 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-31 16:25:43.740578 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-31 16:25:43.740595 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-31 16:25:43.740609 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-31 16:25:43.740627 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-31 16:25:43.740644 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-31 16:25:43.740660 | orchestrator | 2025-05-31 16:25:43.740677 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-31 16:25:43.740704 | orchestrator | Saturday 31 May 2025 16:14:20 +0000 (0:00:04.326) 0:00:46.480 ********** 2025-05-31 16:25:43.740722 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.740738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.740755 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.740772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-31 16:25:43.740782 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-31 16:25:43.740792 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-31 16:25:43.740801 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.740810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-31 16:25:43.740820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-31 16:25:43.740829 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.740838 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-31 16:25:43.740848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:25:43.740857 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.740867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:25:43.740876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:25:43.740911 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:25:43.740921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:25:43.740931 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.740940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:25:43.740955 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.740972 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:25:43.740985 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:25:43.740994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:25:43.741004 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.741013 | orchestrator | 2025-05-31 16:25:43.741022 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses 
to monitor_interface - ipv6] **** 2025-05-31 16:25:43.741064 | orchestrator | Saturday 31 May 2025 16:14:22 +0000 (0:00:01.245) 0:00:47.725 ********** 2025-05-31 16:25:43.741075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.741085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.741094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.741103 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.741112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-31 16:25:43.741127 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-31 16:25:43.741145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-31 16:25:43.741163 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.741180 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-31 16:25:43.741199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:25:43.741211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-31 16:25:43.741220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:25:43.741229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:25:43.741239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-31 16:25:43.741248 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.741258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:25:43.741274 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:25:43.741291 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:25:43.741300 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.741312 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.741328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:25:43.741345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:25:43.741361 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:25:43.741385 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.741403 | orchestrator | 2025-05-31 16:25:43.741420 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-31 16:25:43.741436 | orchestrator | Saturday 31 May 2025 16:14:23 +0000 (0:00:01.528) 0:00:49.253 ********** 2025-05-31 16:25:43.741454 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-31 16:25:43.741471 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:25:43.741488 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:25:43.741503 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:25:43.741513 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-31 16:25:43.741522 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:25:43.741531 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 
16:25:43.741541 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:25:43.741550 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-31 16:25:43.741568 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:25:43.741577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:25:43.741587 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:25:43.741596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:25:43.741618 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.741636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:25:43.741651 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:25:43.741667 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.741677 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:25:43.741687 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:25:43.741696 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:25:43.741705 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.741715 | orchestrator | 2025-05-31 16:25:43.741724 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-31 16:25:43.741734 | orchestrator | Saturday 31 May 2025 16:14:24 +0000 (0:00:01.157) 0:00:50.411 ********** 2025-05-31 16:25:43.741743 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.741753 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.741762 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.741772 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.741781 | orchestrator | 2025-05-31 16:25:43.741791 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.741801 | orchestrator | Saturday 31 May 2025 16:14:26 +0000 (0:00:01.419) 0:00:51.830 ********** 2025-05-31 16:25:43.741811 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.741820 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.741829 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.741838 | orchestrator | 2025-05-31 16:25:43.741848 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.741857 | orchestrator | Saturday 31 May 2025 16:14:27 +0000 (0:00:00.696) 0:00:52.527 ********** 2025-05-31 16:25:43.741867 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.741876 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.741907 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.741916 | orchestrator | 2025-05-31 16:25:43.741926 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2025-05-31 16:25:43.741935 | orchestrator | Saturday 31 May 2025 16:14:27 +0000 (0:00:00.741) 0:00:53.268 ********** 2025-05-31 16:25:43.741945 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.741954 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.741963 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.741973 | orchestrator | 2025-05-31 16:25:43.741982 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.741991 | orchestrator | Saturday 31 May 2025 16:14:28 +0000 (0:00:00.674) 0:00:53.943 ********** 2025-05-31 16:25:43.742001 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.742010 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.742513 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.742539 | orchestrator | 2025-05-31 16:25:43.742556 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.742569 | orchestrator | Saturday 31 May 2025 16:14:29 +0000 (0:00:01.224) 0:00:55.168 ********** 2025-05-31 16:25:43.742579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.742588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.742598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.742607 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.742616 | orchestrator | 2025-05-31 16:25:43.742633 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.742653 | orchestrator | Saturday 31 May 2025 16:14:30 +0000 (0:00:00.690) 0:00:55.859 ********** 2025-05-31 16:25:43.742663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.742672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.742682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.742691 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.742756 | orchestrator | 2025-05-31 16:25:43.742766 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.742775 | orchestrator | Saturday 31 May 2025 16:14:30 +0000 (0:00:00.600) 0:00:56.459 ********** 2025-05-31 16:25:43.742785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.743262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.743282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.743290 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.743298 | orchestrator | 2025-05-31 16:25:43.743306 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.743314 | orchestrator | Saturday 31 May 2025 16:14:31 +0000 (0:00:00.717) 0:00:57.176 ********** 2025-05-31 16:25:43.743321 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.743329 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.743355 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.743363 | orchestrator | 2025-05-31 16:25:43.743379 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.743397 | orchestrator | Saturday 31 May 2025 16:14:32 +0000 (0:00:00.475) 0:00:57.652 ********** 2025-05-31 16:25:43.743405 | 
orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-31 16:25:43.743412 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-31 16:25:43.743420 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-31 16:25:43.743428 | orchestrator | 2025-05-31 16:25:43.743436 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.743443 | orchestrator | Saturday 31 May 2025 16:14:33 +0000 (0:00:00.925) 0:00:58.577 ********** 2025-05-31 16:25:43.743465 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.743473 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.743481 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.743491 | orchestrator | 2025-05-31 16:25:43.743504 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.743522 | orchestrator | Saturday 31 May 2025 16:14:33 +0000 (0:00:00.507) 0:00:59.085 ********** 2025-05-31 16:25:43.743539 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.743578 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.743592 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.743642 | orchestrator | 2025-05-31 16:25:43.743656 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.743664 | orchestrator | Saturday 31 May 2025 16:14:34 +0000 (0:00:00.722) 0:00:59.808 ********** 2025-05-31 16:25:43.743672 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.743680 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.743688 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.743696 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.743712 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.743721 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.743728 | orchestrator | 2025-05-31 16:25:43.743741 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.743753 | orchestrator | Saturday 31 May 2025 16:14:35 +0000 (0:00:00.974) 0:01:00.782 ********** 2025-05-31 16:25:43.743812 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.743826 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.743839 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.743915 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.743926 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.743936 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.743945 | orchestrator | 2025-05-31 16:25:43.743953 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.743962 | orchestrator | Saturday 31 May 2025 16:14:36 +0000 (0:00:00.862) 0:01:01.644 ********** 2025-05-31 16:25:43.743971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.743980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.743989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2025-05-31 16:25:43.743998 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.744006 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.744020 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.744032 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:25:43.744045 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.744059 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:25:43.744070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.744078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.744085 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.744094 | orchestrator | 2025-05-31 16:25:43.744101 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-31 16:25:43.744109 | orchestrator | Saturday 31 May 2025 16:14:37 +0000 (0:00:01.174) 0:01:02.819 ********** 2025-05-31 16:25:43.744117 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744125 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744139 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.744147 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.744155 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.744162 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.744170 | orchestrator | 2025-05-31 16:25:43.744177 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-31 16:25:43.744185 | orchestrator | Saturday 31 May 2025 16:14:38 +0000 (0:00:00.967) 0:01:03.787 ********** 2025-05-31 16:25:43.744192 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.744200 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:25:43.744208 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:25:43.744216 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-31 16:25:43.744223 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-31 16:25:43.744231 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-31 16:25:43.744238 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-31 16:25:43.744246 | orchestrator | 2025-05-31 16:25:43.744254 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-31 16:25:43.744261 | orchestrator | Saturday 31 May 2025 16:14:39 +0000 (0:00:00.982) 0:01:04.770 ********** 2025-05-31 16:25:43.744269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.744286 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:25:43.744295 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:25:43.744302 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-31 16:25:43.744330 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-31 16:25:43.744339 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-31 16:25:43.744346 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-31 16:25:43.744354 | orchestrator | 2025-05-31 16:25:43.744361 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.744369 | orchestrator | Saturday 31 May 2025 16:14:41 +0000 (0:00:02.102) 0:01:06.873 ********** 2025-05-31 16:25:43.744385 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.744399 | orchestrator | 2025-05-31 16:25:43.744409 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.744417 | orchestrator | Saturday 31 May 2025 16:14:42 +0000 (0:00:00.964) 0:01:07.837 ********** 2025-05-31 16:25:43.744425 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.744432 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.744440 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.744448 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.744456 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.744463 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.744471 | orchestrator | 2025-05-31 16:25:43.744479 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.744486 | orchestrator | Saturday 31 May 2025 16:14:43 +0000 (0:00:00.832) 0:01:08.669 ********** 2025-05-31 16:25:43.744494 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744501 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744509 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.744517 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.744524 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.744532 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.744539 | orchestrator | 2025-05-31 16:25:43.744547 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.744555 | orchestrator | Saturday 31 May 2025 16:14:44 +0000 (0:00:01.166) 0:01:09.836 ********** 2025-05-31 16:25:43.744562 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744570 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744577 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.744585 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.744593 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.744600 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.744608 | orchestrator | 2025-05-31 16:25:43.744615 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.744623 | orchestrator | Saturday 31 May 2025 16:14:45 +0000 (0:00:01.243) 0:01:11.080 ********** 2025-05-31 16:25:43.744630 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744638 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744645 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.744653 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.744661 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.744668 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.744676 | orchestrator | 2025-05-31 16:25:43.744683 | 
orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.744691 | orchestrator | Saturday 31 May 2025 16:14:46 +0000 (0:00:00.962) 0:01:12.042 ********** 2025-05-31 16:25:43.744699 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.744706 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.744714 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.744721 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.744729 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.744737 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.744744 | orchestrator | 2025-05-31 16:25:43.744752 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.744776 | orchestrator | Saturday 31 May 2025 16:14:47 +0000 (0:00:00.829) 0:01:12.872 ********** 2025-05-31 16:25:43.744784 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744792 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744800 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.744808 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.744816 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.744823 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.744831 | orchestrator | 2025-05-31 16:25:43.744839 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-31 16:25:43.744847 | orchestrator | Saturday 31 May 2025 16:14:47 +0000 (0:00:00.567) 0:01:13.439 ********** 2025-05-31 16:25:43.744854 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744862 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744870 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.744893 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.744907 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.744919 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.744930 | orchestrator | 2025-05-31 16:25:43.744943 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-31 16:25:43.744958 | orchestrator | Saturday 31 May 2025 16:14:48 +0000 (0:00:00.636) 0:01:14.076 ********** 2025-05-31 16:25:43.744971 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.744984 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.744997 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745010 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745017 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745025 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.745033 | orchestrator | 2025-05-31 16:25:43.745040 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.745049 | orchestrator | Saturday 31 May 2025 16:14:49 +0000 (0:00:00.643) 0:01:14.719 ********** 2025-05-31 16:25:43.745070 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745085 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745098 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745108 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745116 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745124 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.745132 | orchestrator | 2025-05-31 16:25:43.745145 | orchestrator 
| TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.745159 | orchestrator | Saturday 31 May 2025 16:14:50 +0000 (0:00:01.074) 0:01:15.794 ********** 2025-05-31 16:25:43.745172 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745183 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745191 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745227 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745236 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745243 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.745251 | orchestrator | 2025-05-31 16:25:43.745258 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.745266 | orchestrator | Saturday 31 May 2025 16:14:51 +0000 (0:00:00.853) 0:01:16.648 ********** 2025-05-31 16:25:43.745274 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.745282 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.745289 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.745297 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.745305 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.745312 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.745320 | orchestrator | 2025-05-31 16:25:43.745328 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.745336 | orchestrator | Saturday 31 May 2025 16:14:52 +0000 (0:00:01.160) 0:01:17.808 ********** 2025-05-31 16:25:43.745343 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745357 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745365 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745373 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745380 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745388 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.745396 | orchestrator | 2025-05-31 16:25:43.745403 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-31 16:25:43.745411 | orchestrator | Saturday 31 May 2025 16:14:53 +0000 (0:00:00.690) 0:01:18.498 ********** 2025-05-31 16:25:43.745419 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.745427 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.745434 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.745442 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745449 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745457 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.745467 | orchestrator | 2025-05-31 16:25:43.745480 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 16:25:43.745494 | orchestrator | Saturday 31 May 2025 16:14:53 +0000 (0:00:00.704) 0:01:19.203 ********** 2025-05-31 16:25:43.745502 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745509 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745517 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745525 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.745533 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.745547 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.745560 | orchestrator | 2025-05-31 16:25:43.745574 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] 
****************************** 2025-05-31 16:25:43.745587 | orchestrator | Saturday 31 May 2025 16:14:54 +0000 (0:00:00.538) 0:01:19.741 ********** 2025-05-31 16:25:43.745600 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745614 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745627 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745639 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.745652 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.745659 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.745667 | orchestrator | 2025-05-31 16:25:43.745675 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.745683 | orchestrator | Saturday 31 May 2025 16:14:54 +0000 (0:00:00.700) 0:01:20.441 ********** 2025-05-31 16:25:43.745690 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745698 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745705 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745713 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.745721 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.745728 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.745736 | orchestrator | 2025-05-31 16:25:43.745744 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.745756 | orchestrator | Saturday 31 May 2025 16:14:55 +0000 (0:00:00.693) 0:01:21.135 ********** 2025-05-31 16:25:43.745763 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745771 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745779 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745786 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745794 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745801 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.745809 | orchestrator | 2025-05-31 16:25:43.745816 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 16:25:43.745824 | orchestrator | Saturday 31 May 2025 16:14:56 +0000 (0:00:00.673) 0:01:21.808 ********** 2025-05-31 16:25:43.745832 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.745839 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.745847 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.745854 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.745862 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.745975 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746080 | orchestrator | 2025-05-31 16:25:43.746093 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.746101 | orchestrator | Saturday 31 May 2025 16:14:56 +0000 (0:00:00.558) 0:01:22.366 ********** 2025-05-31 16:25:43.746109 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.746117 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.746125 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.746133 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746140 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746148 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746156 | orchestrator | 2025-05-31 16:25:43.746163 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 
16:25:43.746183 | orchestrator | Saturday 31 May 2025 16:14:57 +0000 (0:00:00.642) 0:01:23.009 ********** 2025-05-31 16:25:43.746191 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.746199 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.746206 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.746214 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.746222 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.746229 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.746237 | orchestrator | 2025-05-31 16:25:43.746245 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.746252 | orchestrator | Saturday 31 May 2025 16:14:58 +0000 (0:00:00.550) 0:01:23.560 ********** 2025-05-31 16:25:43.746260 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746266 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746273 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746279 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746286 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746292 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746299 | orchestrator | 2025-05-31 16:25:43.746305 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.746312 | orchestrator | Saturday 31 May 2025 16:14:58 +0000 (0:00:00.858) 0:01:24.418 ********** 2025-05-31 16:25:43.746318 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746324 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746331 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746337 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746344 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746350 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746357 | orchestrator | 2025-05-31 16:25:43.746363 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.746370 | orchestrator | Saturday 31 May 2025 16:14:59 +0000 (0:00:00.810) 0:01:25.228 ********** 2025-05-31 16:25:43.746376 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746383 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746389 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746395 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746402 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746408 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746415 | orchestrator | 2025-05-31 16:25:43.746421 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.746428 | orchestrator | Saturday 31 May 2025 16:15:00 +0000 (0:00:01.007) 0:01:26.236 ********** 2025-05-31 16:25:43.746434 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746441 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746447 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746454 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746460 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746467 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746473 | orchestrator | 2025-05-31 16:25:43.746479 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.746486 | orchestrator 
| Saturday 31 May 2025 16:15:01 +0000 (0:00:00.450) 0:01:26.687 ********** 2025-05-31 16:25:43.746500 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746507 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746513 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746520 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746527 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746533 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746539 | orchestrator | 2025-05-31 16:25:43.746546 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.746552 | orchestrator | Saturday 31 May 2025 16:15:01 +0000 (0:00:00.575) 0:01:27.262 ********** 2025-05-31 16:25:43.746559 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746566 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746572 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746578 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746585 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746591 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746598 | orchestrator | 2025-05-31 16:25:43.746604 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.746611 | orchestrator | Saturday 31 May 2025 16:15:02 +0000 (0:00:00.583) 0:01:27.846 ********** 2025-05-31 16:25:43.746617 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746624 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746630 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746637 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746643 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746670 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746678 | orchestrator | 2025-05-31 16:25:43.746684 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.746691 | orchestrator | Saturday 31 May 2025 16:15:03 +0000 (0:00:00.676) 0:01:28.523 ********** 2025-05-31 16:25:43.746697 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746704 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746710 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746717 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746723 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746729 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746736 | orchestrator | 2025-05-31 16:25:43.746742 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.746749 | orchestrator | Saturday 31 May 2025 16:15:03 +0000 (0:00:00.528) 0:01:29.051 ********** 2025-05-31 16:25:43.746755 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746762 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746768 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746774 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746781 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746787 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746794 | orchestrator | 2025-05-31 16:25:43.746800 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.746807 | orchestrator | Saturday 31 May 2025 16:15:04 +0000 (0:00:00.672) 0:01:29.724 ********** 2025-05-31 16:25:43.746813 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746820 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746839 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746846 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746852 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746859 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746865 | orchestrator | 2025-05-31 16:25:43.746872 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.746895 | orchestrator | Saturday 31 May 2025 16:15:04 +0000 (0:00:00.523) 0:01:30.248 ********** 2025-05-31 16:25:43.746911 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746918 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746924 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746931 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746937 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.746943 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.746950 | orchestrator | 2025-05-31 16:25:43.746956 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.746963 | orchestrator | Saturday 31 May 2025 16:15:05 +0000 (0:00:00.659) 0:01:30.908 ********** 2025-05-31 16:25:43.746969 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.746976 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.746982 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.746988 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.746995 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747001 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747008 | orchestrator | 2025-05-31 16:25:43.747014 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.747051 | orchestrator | Saturday 31 May 2025 16:15:06 +0000 (0:00:00.596) 0:01:31.505 ********** 2025-05-31 16:25:43.747058 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.747064 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.747071 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747077 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.747084 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.747090 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747097 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.747103 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.747110 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747116 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.747122 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.747129 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747135 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.747142 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.747149 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747155 | 
orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.747161 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.747168 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747174 | orchestrator | 2025-05-31 16:25:43.747181 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.747187 | orchestrator | Saturday 31 May 2025 16:15:06 +0000 (0:00:00.837) 0:01:32.342 ********** 2025-05-31 16:25:43.747194 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-31 16:25:43.747200 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-31 16:25:43.747207 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747213 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-31 16:25:43.747220 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-31 16:25:43.747226 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747233 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-31 16:25:43.747239 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-31 16:25:43.747246 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747252 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-31 16:25:43.747258 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-31 16:25:43.747265 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747271 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-31 16:25:43.747278 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-31 16:25:43.747300 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747310 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-31 16:25:43.747316 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-31 16:25:43.747323 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747329 | orchestrator | 2025-05-31 16:25:43.747336 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.747343 | orchestrator | Saturday 31 May 2025 16:15:07 +0000 (0:00:00.673) 0:01:33.016 ********** 2025-05-31 16:25:43.747349 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747355 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747362 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747369 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747375 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747440 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747447 | orchestrator | 2025-05-31 16:25:43.747453 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.747460 | orchestrator | Saturday 31 May 2025 16:15:08 +0000 (0:00:00.849) 0:01:33.865 ********** 2025-05-31 16:25:43.747467 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747473 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747480 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747486 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747493 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747499 | orchestrator | skipping: [testbed-node-5] 2025-05-31 
16:25:43.747505 | orchestrator | 2025-05-31 16:25:43.747512 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.747524 | orchestrator | Saturday 31 May 2025 16:15:09 +0000 (0:00:00.828) 0:01:34.694 ********** 2025-05-31 16:25:43.747531 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747538 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747544 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747551 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747557 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747564 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747571 | orchestrator | 2025-05-31 16:25:43.747577 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.747584 | orchestrator | Saturday 31 May 2025 16:15:10 +0000 (0:00:01.032) 0:01:35.726 ********** 2025-05-31 16:25:43.747590 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747597 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747603 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747610 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747616 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747622 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747629 | orchestrator | 2025-05-31 16:25:43.747635 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:25:43.747642 | orchestrator | Saturday 31 May 2025 16:15:10 +0000 (0:00:00.623) 0:01:36.349 ********** 2025-05-31 16:25:43.747648 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747655 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747661 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747668 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747674 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747681 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747687 | orchestrator | 2025-05-31 16:25:43.747693 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.747700 | orchestrator | Saturday 31 May 2025 16:15:11 +0000 (0:00:01.085) 0:01:37.435 ********** 2025-05-31 16:25:43.747707 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747713 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747719 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747730 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747737 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747743 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747750 | orchestrator | 2025-05-31 16:25:43.747756 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.747763 | orchestrator | Saturday 31 May 2025 16:15:12 +0000 (0:00:00.820) 0:01:38.256 ********** 2025-05-31 16:25:43.747770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.747776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.747783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.747790 | orchestrator | skipping: [testbed-node-0] 2025-05-31 
16:25:43.747796 | orchestrator | 2025-05-31 16:25:43.747803 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.747809 | orchestrator | Saturday 31 May 2025 16:15:13 +0000 (0:00:00.437) 0:01:38.693 ********** 2025-05-31 16:25:43.747816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.747822 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.747829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.747835 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747842 | orchestrator | 2025-05-31 16:25:43.747848 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.747855 | orchestrator | Saturday 31 May 2025 16:15:14 +0000 (0:00:00.877) 0:01:39.571 ********** 2025-05-31 16:25:43.747861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.747868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.747874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.747894 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747901 | orchestrator | 2025-05-31 16:25:43.747908 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.747914 | orchestrator | Saturday 31 May 2025 16:15:14 +0000 (0:00:00.601) 0:01:40.173 ********** 2025-05-31 16:25:43.747921 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747927 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.747934 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.747940 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.747946 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.747957 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.747964 | orchestrator | 2025-05-31 16:25:43.747970 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.747977 | orchestrator | Saturday 31 May 2025 16:15:15 +0000 (0:00:00.600) 0:01:40.773 ********** 2025-05-31 16:25:43.747983 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.747990 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.747996 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.748003 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748009 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.748015 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748022 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.748028 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748035 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.748041 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748048 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.748054 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748061 | orchestrator | 2025-05-31 16:25:43.748067 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.748074 | orchestrator | Saturday 31 May 2025 16:15:16 +0000 (0:00:00.958) 0:01:41.731 ********** 2025-05-31 16:25:43.748080 | orchestrator | skipping: 
[testbed-node-0] 2025-05-31 16:25:43.748091 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748098 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748104 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748111 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748117 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748124 | orchestrator | 2025-05-31 16:25:43.748134 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.748141 | orchestrator | Saturday 31 May 2025 16:15:16 +0000 (0:00:00.598) 0:01:42.330 ********** 2025-05-31 16:25:43.748148 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748154 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748161 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748167 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748174 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748180 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748186 | orchestrator | 2025-05-31 16:25:43.748193 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.748200 | orchestrator | Saturday 31 May 2025 16:15:17 +0000 (0:00:00.775) 0:01:43.105 ********** 2025-05-31 16:25:43.748206 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.748213 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748219 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.748225 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748232 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.748238 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748245 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.748251 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748258 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.748264 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748270 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.748277 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748283 | orchestrator | 2025-05-31 16:25:43.748290 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.748296 | orchestrator | Saturday 31 May 2025 16:15:18 +0000 (0:00:00.739) 0:01:43.845 ********** 2025-05-31 16:25:43.748303 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748309 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748316 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748322 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.748328 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748335 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.748342 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748348 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.748355 | orchestrator | skipping: [testbed-node-5] 2025-05-31 
16:25:43.748361 | orchestrator | 2025-05-31 16:25:43.748368 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.748374 | orchestrator | Saturday 31 May 2025 16:15:19 +0000 (0:00:00.788) 0:01:44.634 ********** 2025-05-31 16:25:43.748381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.748387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.748394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.748400 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 16:25:43.748407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-31 16:25:43.748413 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 16:25:43.748424 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748430 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 16:25:43.748437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-31 16:25:43.748443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 16:25:43.748450 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.748463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.748469 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.748485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.748492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.748498 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:25:43.748505 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748511 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748518 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:25:43.748524 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.748530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.748537 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748543 | orchestrator | 2025-05-31 16:25:43.748550 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.748556 | orchestrator | Saturday 31 May 2025 16:15:20 +0000 (0:00:01.487) 0:01:46.121 ********** 2025-05-31 16:25:43.748563 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748569 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748576 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748582 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748589 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748595 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748602 | orchestrator | 2025-05-31 16:25:43.748608 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.748615 | orchestrator | Saturday 31 May 2025 16:15:21 +0000 (0:00:01.165) 0:01:47.286 ********** 2025-05-31 16:25:43.748621 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748628 | 
orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748638 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748644 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.748651 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748657 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.748664 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748670 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.748677 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748683 | orchestrator | 2025-05-31 16:25:43.748690 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.748697 | orchestrator | Saturday 31 May 2025 16:15:22 +0000 (0:00:01.179) 0:01:48.466 ********** 2025-05-31 16:25:43.748703 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748710 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748716 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748723 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748729 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748736 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748742 | orchestrator | 2025-05-31 16:25:43.748748 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.748755 | orchestrator | Saturday 31 May 2025 16:15:24 +0000 (0:00:01.192) 0:01:49.659 ********** 2025-05-31 16:25:43.748762 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.748768 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.748779 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.748785 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.748791 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.748798 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.748804 | orchestrator | 2025-05-31 16:25:43.748811 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-31 16:25:43.748817 | orchestrator | Saturday 31 May 2025 16:15:25 +0000 (0:00:01.191) 0:01:50.851 ********** 2025-05-31 16:25:43.748824 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.748830 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.748837 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.748843 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.748850 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.748856 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.748863 | orchestrator | 2025-05-31 16:25:43.748869 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-31 16:25:43.748876 | orchestrator | Saturday 31 May 2025 16:15:26 +0000 (0:00:01.336) 0:01:52.188 ********** 2025-05-31 16:25:43.748914 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.748920 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.748927 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.748933 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.748940 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.748946 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.748953 | orchestrator | 2025-05-31 16:25:43.748959 | orchestrator | TASK [ceph-container-common : include 
prerequisites.yml] *********************** 2025-05-31 16:25:43.748966 | orchestrator | Saturday 31 May 2025 16:15:28 +0000 (0:00:02.175) 0:01:54.364 ********** 2025-05-31 16:25:43.748973 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.748980 | orchestrator | 2025-05-31 16:25:43.748987 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-31 16:25:43.748994 | orchestrator | Saturday 31 May 2025 16:15:29 +0000 (0:00:01.108) 0:01:55.472 ********** 2025-05-31 16:25:43.749000 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749007 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749013 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749019 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749026 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749032 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749039 | orchestrator | 2025-05-31 16:25:43.749045 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-31 16:25:43.749052 | orchestrator | Saturday 31 May 2025 16:15:30 +0000 (0:00:00.601) 0:01:56.073 ********** 2025-05-31 16:25:43.749058 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749065 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749071 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749078 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749084 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749090 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749097 | orchestrator | 2025-05-31 16:25:43.749107 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-31 16:25:43.749114 | orchestrator | Saturday 31 May 2025 16:15:31 +0000 (0:00:00.780) 0:01:56.854 ********** 2025-05-31 16:25:43.749120 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-31 16:25:43.749127 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-31 16:25:43.749133 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-31 16:25:43.749140 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-31 16:25:43.749146 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-31 16:25:43.749158 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-31 16:25:43.749164 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-31 16:25:43.749171 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-31 16:25:43.749177 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-31 16:25:43.749184 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-31 16:25:43.749190 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-31 16:25:43.749200 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-31 
16:25:43.749207 | orchestrator | 2025-05-31 16:25:43.749214 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-31 16:25:43.749220 | orchestrator | Saturday 31 May 2025 16:15:32 +0000 (0:00:01.256) 0:01:58.110 ********** 2025-05-31 16:25:43.749227 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.749233 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.749240 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.749246 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.749252 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.749259 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.749265 | orchestrator | 2025-05-31 16:25:43.749272 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-05-31 16:25:43.749279 | orchestrator | Saturday 31 May 2025 16:15:33 +0000 (0:00:01.226) 0:01:59.337 ********** 2025-05-31 16:25:43.749285 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749292 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749298 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749305 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749311 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749317 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749324 | orchestrator | 2025-05-31 16:25:43.749330 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-31 16:25:43.749337 | orchestrator | Saturday 31 May 2025 16:15:34 +0000 (0:00:00.588) 0:01:59.926 ********** 2025-05-31 16:25:43.749343 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749350 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749356 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749363 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749369 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749376 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749382 | orchestrator | 2025-05-31 16:25:43.749389 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-31 16:25:43.749395 | orchestrator | Saturday 31 May 2025 16:15:35 +0000 (0:00:00.770) 0:02:00.696 ********** 2025-05-31 16:25:43.749402 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.749409 | orchestrator | 2025-05-31 16:25:43.749415 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-31 16:25:43.749422 | orchestrator | Saturday 31 May 2025 16:15:36 +0000 (0:00:01.309) 0:02:02.006 ********** 2025-05-31 16:25:43.749428 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.749435 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.749441 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.749448 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.749454 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.749461 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.749467 | orchestrator | 2025-05-31 16:25:43.749474 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-31 16:25:43.749485 | orchestrator | Saturday 31 May 2025 
16:16:16 +0000 (0:00:40.435) 0:02:42.442 ********** 2025-05-31 16:25:43.749491 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-31 16:25:43.749498 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-31 16:25:43.749504 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-31 16:25:43.749511 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749518 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-31 16:25:43.749524 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-31 16:25:43.749531 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-31 16:25:43.749537 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749544 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-31 16:25:43.749550 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-31 16:25:43.749556 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-31 16:25:43.749566 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749573 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-31 16:25:43.749579 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-31 16:25:43.749586 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-31 16:25:43.749592 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749599 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-31 16:25:43.749605 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-31 16:25:43.749611 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-31 16:25:43.749618 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749624 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-31 16:25:43.749631 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-31 16:25:43.749637 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-31 16:25:43.749644 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749650 | orchestrator | 2025-05-31 16:25:43.749657 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-31 16:25:43.749663 | orchestrator | Saturday 31 May 2025 16:16:17 +0000 (0:00:00.820) 0:02:43.262 ********** 2025-05-31 16:25:43.749670 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749679 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749686 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749693 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749699 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749706 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749712 | orchestrator | 2025-05-31 16:25:43.749718 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-31 16:25:43.749725 | orchestrator | Saturday 31 May 2025 16:16:18 +0000 (0:00:00.599) 0:02:43.861 
********** 2025-05-31 16:25:43.749731 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749738 | orchestrator | 2025-05-31 16:25:43.749744 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-31 16:25:43.749751 | orchestrator | Saturday 31 May 2025 16:16:18 +0000 (0:00:00.131) 0:02:43.993 ********** 2025-05-31 16:25:43.749757 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749764 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749770 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749777 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749783 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749796 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749802 | orchestrator | 2025-05-31 16:25:43.749809 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-31 16:25:43.749815 | orchestrator | Saturday 31 May 2025 16:16:19 +0000 (0:00:00.675) 0:02:44.669 ********** 2025-05-31 16:25:43.749821 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749828 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749834 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749841 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749847 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749854 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749860 | orchestrator | 2025-05-31 16:25:43.749867 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-31 16:25:43.749873 | orchestrator | Saturday 31 May 2025 16:16:19 +0000 (0:00:00.624) 0:02:45.293 ********** 2025-05-31 16:25:43.749892 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.749898 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.749905 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.749911 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.749918 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.749924 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.749931 | orchestrator | 2025-05-31 16:25:43.749937 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-31 16:25:43.749944 | orchestrator | Saturday 31 May 2025 16:16:20 +0000 (0:00:00.709) 0:02:46.002 ********** 2025-05-31 16:25:43.749950 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.749957 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.749963 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.749970 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.749976 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.749982 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.749989 | orchestrator | 2025-05-31 16:25:43.749995 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-31 16:25:43.750002 | orchestrator | Saturday 31 May 2025 16:16:22 +0000 (0:00:01.589) 0:02:47.592 ********** 2025-05-31 16:25:43.750008 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.750034 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.750042 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.750049 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.750055 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.750061 | orchestrator | ok: 
[testbed-node-5] 2025-05-31 16:25:43.750068 | orchestrator | 2025-05-31 16:25:43.750074 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-31 16:25:43.750081 | orchestrator | Saturday 31 May 2025 16:16:22 +0000 (0:00:00.578) 0:02:48.170 ********** 2025-05-31 16:25:43.750088 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.750095 | orchestrator | 2025-05-31 16:25:43.750102 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-31 16:25:43.750109 | orchestrator | Saturday 31 May 2025 16:16:23 +0000 (0:00:01.014) 0:02:49.184 ********** 2025-05-31 16:25:43.750115 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750122 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750128 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750135 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750141 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750147 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750154 | orchestrator | 2025-05-31 16:25:43.750164 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-31 16:25:43.750170 | orchestrator | Saturday 31 May 2025 16:16:24 +0000 (0:00:00.561) 0:02:49.746 ********** 2025-05-31 16:25:43.750177 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750183 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750197 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750204 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750210 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750217 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750223 | orchestrator | 2025-05-31 16:25:43.750230 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-31 16:25:43.750236 | orchestrator | Saturday 31 May 2025 16:16:25 +0000 (0:00:00.763) 0:02:50.510 ********** 2025-05-31 16:25:43.750243 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750249 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750255 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750262 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750268 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750275 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750281 | orchestrator | 2025-05-31 16:25:43.750288 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-31 16:25:43.750294 | orchestrator | Saturday 31 May 2025 16:16:25 +0000 (0:00:00.639) 0:02:51.149 ********** 2025-05-31 16:25:43.750301 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750307 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750314 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750320 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750331 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750337 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750344 | orchestrator | 2025-05-31 16:25:43.750350 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-31 16:25:43.750357 
| orchestrator | Saturday 31 May 2025 16:16:26 +0000 (0:00:00.837) 0:02:51.987 ********** 2025-05-31 16:25:43.750363 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750370 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750376 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750383 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750389 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750395 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750402 | orchestrator | 2025-05-31 16:25:43.750408 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-31 16:25:43.750415 | orchestrator | Saturday 31 May 2025 16:16:27 +0000 (0:00:00.540) 0:02:52.528 ********** 2025-05-31 16:25:43.750421 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750428 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750434 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750441 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750447 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750454 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750460 | orchestrator | 2025-05-31 16:25:43.750467 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-31 16:25:43.750473 | orchestrator | Saturday 31 May 2025 16:16:27 +0000 (0:00:00.822) 0:02:53.350 ********** 2025-05-31 16:25:43.750480 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.750486 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.750492 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.750499 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.750505 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.750511 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.750518 | orchestrator | 2025-05-31 16:25:43.750524 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-31 16:25:43.750531 | orchestrator | Saturday 31 May 2025 16:16:28 +0000 (0:00:00.600) 0:02:53.951 ********** 2025-05-31 16:25:43.750537 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.750544 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.750550 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.750557 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.750563 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.750569 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.750580 | orchestrator | 2025-05-31 16:25:43.750587 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.750594 | orchestrator | Saturday 31 May 2025 16:16:29 +0000 (0:00:01.104) 0:02:55.056 ********** 2025-05-31 16:25:43.750600 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.750607 | orchestrator | 2025-05-31 16:25:43.750613 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-31 16:25:43.750620 | orchestrator | Saturday 31 May 2025 16:16:30 +0000 (0:00:01.147) 0:02:56.203 ********** 2025-05-31 16:25:43.750626 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-31 16:25:43.750633 | orchestrator | changed: 
[testbed-node-1] => (item=/etc/ceph) 2025-05-31 16:25:43.750639 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-31 16:25:43.750646 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-31 16:25:43.750653 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-31 16:25:43.750659 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-31 16:25:43.750666 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-31 16:25:43.750672 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-31 16:25:43.750679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-31 16:25:43.750685 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-31 16:25:43.750692 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-31 16:25:43.750698 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-31 16:25:43.750704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-31 16:25:43.750711 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-31 16:25:43.750721 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-31 16:25:43.750727 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-31 16:25:43.750734 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-31 16:25:43.750740 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-31 16:25:43.750747 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-31 16:25:43.750753 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-31 16:25:43.750760 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-31 16:25:43.750766 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-31 16:25:43.750772 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-31 16:25:43.750779 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-31 16:25:43.750785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-31 16:25:43.750792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-31 16:25:43.750798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-31 16:25:43.750805 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-31 16:25:43.750811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-31 16:25:43.750817 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-31 16:25:43.750824 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-31 16:25:43.750834 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-31 16:25:43.750841 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-31 16:25:43.750847 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-31 16:25:43.750854 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-31 16:25:43.750860 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-31 16:25:43.750867 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-31 16:25:43.750891 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-31 16:25:43.750898 
| orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-31 16:25:43.750904 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-31 16:25:43.750911 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-31 16:25:43.750917 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-31 16:25:43.750924 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-31 16:25:43.750930 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-31 16:25:43.750937 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-31 16:25:43.750943 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-31 16:25:43.750950 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-31 16:25:43.750956 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-31 16:25:43.750963 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-31 16:25:43.750969 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-31 16:25:43.750976 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-31 16:25:43.750982 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-31 16:25:43.750988 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-31 16:25:43.750995 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-31 16:25:43.751001 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-31 16:25:43.751008 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-31 16:25:43.751014 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-31 16:25:43.751021 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-31 16:25:43.751027 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-31 16:25:43.751034 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-31 16:25:43.751040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-31 16:25:43.751047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-31 16:25:43.751053 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-31 16:25:43.751060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-31 16:25:43.751066 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-31 16:25:43.751072 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-31 16:25:43.751079 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-31 16:25:43.751085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-31 16:25:43.751092 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-31 16:25:43.751098 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-31 16:25:43.751105 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-31 16:25:43.751111 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-31 16:25:43.751121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-31 16:25:43.751128 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-31 16:25:43.751134 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-31 16:25:43.751141 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-31 16:25:43.751152 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-31 16:25:43.751158 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-31 16:25:43.751165 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-31 16:25:43.751172 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-31 16:25:43.751178 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-31 16:25:43.751185 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-31 16:25:43.751191 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-31 16:25:43.751198 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-31 16:25:43.751204 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-31 16:25:43.751211 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-31 16:25:43.751217 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-31 16:25:43.751224 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-31 16:25:43.751230 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-31 16:25:43.751241 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-31 16:25:43.751247 | orchestrator | 2025-05-31 16:25:43.751254 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.751261 | orchestrator | Saturday 31 May 2025 16:16:36 +0000 (0:00:06.194) 0:03:02.398 ********** 2025-05-31 16:25:43.751267 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751274 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751280 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751287 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.751293 | orchestrator | 2025-05-31 16:25:43.751300 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-31 16:25:43.751306 | orchestrator | Saturday 31 May 2025 16:16:38 +0000 (0:00:01.242) 0:03:03.641 ********** 2025-05-31 16:25:43.751313 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.751320 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.751327 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.751333 | orchestrator | 2025-05-31 16:25:43.751340 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-31 
16:25:43.751346 | orchestrator | Saturday 31 May 2025 16:16:39 +0000 (0:00:01.132) 0:03:04.774 ********** 2025-05-31 16:25:43.751353 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.751360 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.751366 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.751373 | orchestrator | 2025-05-31 16:25:43.751379 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.751386 | orchestrator | Saturday 31 May 2025 16:16:40 +0000 (0:00:01.372) 0:03:06.147 ********** 2025-05-31 16:25:43.751392 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751399 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751406 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751412 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.751419 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.751425 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.751456 | orchestrator | 2025-05-31 16:25:43.751463 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.751470 | orchestrator | Saturday 31 May 2025 16:16:41 +0000 (0:00:00.809) 0:03:06.956 ********** 2025-05-31 16:25:43.751476 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751483 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751489 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751496 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.751502 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.751509 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.751515 | orchestrator | 2025-05-31 16:25:43.751522 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.751528 | orchestrator | Saturday 31 May 2025 16:16:42 +0000 (0:00:00.693) 0:03:07.649 ********** 2025-05-31 16:25:43.751535 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751541 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751548 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751554 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.751561 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.751567 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.751574 | orchestrator | 2025-05-31 16:25:43.751580 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.751587 | orchestrator | Saturday 31 May 2025 16:16:43 +0000 (0:00:00.906) 0:03:08.556 ********** 2025-05-31 16:25:43.751594 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751603 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751610 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751616 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.751623 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.751629 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.751636 | orchestrator | 2025-05-31 16:25:43.751642 | orchestrator | TASK [ceph-config : set_fact 
_devices] ***************************************** 2025-05-31 16:25:43.751649 | orchestrator | Saturday 31 May 2025 16:16:43 +0000 (0:00:00.572) 0:03:09.128 ********** 2025-05-31 16:25:43.751655 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751662 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751668 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751675 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.751681 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.751688 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.751694 | orchestrator | 2025-05-31 16:25:43.751701 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.751707 | orchestrator | Saturday 31 May 2025 16:16:44 +0000 (0:00:00.828) 0:03:09.957 ********** 2025-05-31 16:25:43.751714 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751720 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751727 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751733 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.751740 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.751746 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.751752 | orchestrator | 2025-05-31 16:25:43.751759 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.751769 | orchestrator | Saturday 31 May 2025 16:16:45 +0000 (0:00:00.642) 0:03:10.600 ********** 2025-05-31 16:25:43.751776 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751783 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751789 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751796 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.751802 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.751808 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.751815 | orchestrator | 2025-05-31 16:25:43.751822 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.751833 | orchestrator | Saturday 31 May 2025 16:16:45 +0000 (0:00:00.888) 0:03:11.488 ********** 2025-05-31 16:25:43.751839 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751846 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751852 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751859 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.751865 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.751872 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.751896 | orchestrator | 2025-05-31 16:25:43.751903 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.751910 | orchestrator | Saturday 31 May 2025 16:16:46 +0000 (0:00:00.781) 0:03:12.270 ********** 2025-05-31 16:25:43.751917 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751923 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751929 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751936 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.751942 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.751949 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.751955 | 
orchestrator | 2025-05-31 16:25:43.751962 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.751968 | orchestrator | Saturday 31 May 2025 16:16:48 +0000 (0:00:02.186) 0:03:14.456 ********** 2025-05-31 16:25:43.751975 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.751981 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.751988 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.751994 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.752000 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.752007 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.752013 | orchestrator | 2025-05-31 16:25:43.752020 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.752027 | orchestrator | Saturday 31 May 2025 16:16:49 +0000 (0:00:00.654) 0:03:15.111 ********** 2025-05-31 16:25:43.752033 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.752039 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.752046 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752052 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.752059 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.752065 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752071 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.752078 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.752084 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752091 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.752097 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.752103 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.752110 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.752116 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.752123 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.752129 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.752135 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.752142 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.752148 | orchestrator | 2025-05-31 16:25:43.752155 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.752161 | orchestrator | Saturday 31 May 2025 16:16:50 +0000 (0:00:00.889) 0:03:16.000 ********** 2025-05-31 16:25:43.752168 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-31 16:25:43.752174 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-31 16:25:43.752181 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752188 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-31 16:25:43.752194 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-31 16:25:43.752208 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752215 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-31 16:25:43.752222 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-31 16:25:43.752228 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752234 | orchestrator | ok: [testbed-node-3] => 
(item=osd memory target) 2025-05-31 16:25:43.752241 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-31 16:25:43.752247 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-31 16:25:43.752254 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-31 16:25:43.752260 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-31 16:25:43.752267 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-31 16:25:43.752273 | orchestrator | 2025-05-31 16:25:43.752280 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.752286 | orchestrator | Saturday 31 May 2025 16:16:51 +0000 (0:00:00.660) 0:03:16.660 ********** 2025-05-31 16:25:43.752293 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752299 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752306 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752312 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.752319 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.752325 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.752332 | orchestrator | 2025-05-31 16:25:43.752338 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.752344 | orchestrator | Saturday 31 May 2025 16:16:52 +0000 (0:00:01.054) 0:03:17.715 ********** 2025-05-31 16:25:43.752351 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752361 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752368 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752374 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.752381 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.752387 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.752394 | orchestrator | 2025-05-31 16:25:43.752400 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.752407 | orchestrator | Saturday 31 May 2025 16:16:52 +0000 (0:00:00.722) 0:03:18.437 ********** 2025-05-31 16:25:43.752414 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752420 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752427 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752433 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.752439 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.752446 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.752452 | orchestrator | 2025-05-31 16:25:43.752459 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.752465 | orchestrator | Saturday 31 May 2025 16:16:53 +0000 (0:00:00.888) 0:03:19.326 ********** 2025-05-31 16:25:43.752472 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752478 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752485 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752491 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.752498 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.752504 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.752510 | orchestrator | 2025-05-31 16:25:43.752517 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2025-05-31 16:25:43.752524 | orchestrator | Saturday 31 May 2025 16:16:54 +0000 (0:00:00.612) 0:03:19.938 ********** 2025-05-31 16:25:43.752530 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752537 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752543 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752550 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.752556 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.752567 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.752574 | orchestrator | 2025-05-31 16:25:43.752581 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.752587 | orchestrator | Saturday 31 May 2025 16:16:55 +0000 (0:00:00.688) 0:03:20.627 ********** 2025-05-31 16:25:43.752594 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752600 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752607 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752613 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.752620 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.752626 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.752633 | orchestrator | 2025-05-31 16:25:43.752639 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.752646 | orchestrator | Saturday 31 May 2025 16:16:55 +0000 (0:00:00.638) 0:03:21.265 ********** 2025-05-31 16:25:43.752653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.752659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.752666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.752673 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752679 | orchestrator | 2025-05-31 16:25:43.752686 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.752692 | orchestrator | Saturday 31 May 2025 16:16:56 +0000 (0:00:00.542) 0:03:21.808 ********** 2025-05-31 16:25:43.752699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.752705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.752712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.752718 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752724 | orchestrator | 2025-05-31 16:25:43.752731 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.752738 | orchestrator | Saturday 31 May 2025 16:16:56 +0000 (0:00:00.611) 0:03:22.420 ********** 2025-05-31 16:25:43.752744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.752751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.752757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.752764 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752770 | orchestrator | 2025-05-31 16:25:43.752780 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.752787 | orchestrator | Saturday 31 May 2025 16:16:57 +0000 (0:00:00.339) 0:03:22.760 ********** 2025-05-31 16:25:43.752793 | orchestrator | skipping: [testbed-node-0] 2025-05-31 
16:25:43.752800 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752806 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752813 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.752820 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.752826 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.752833 | orchestrator | 2025-05-31 16:25:43.752839 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.752846 | orchestrator | Saturday 31 May 2025 16:16:57 +0000 (0:00:00.545) 0:03:23.306 ********** 2025-05-31 16:25:43.752852 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.752859 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.752866 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752872 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.752892 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752899 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752905 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-31 16:25:43.752912 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-31 16:25:43.752918 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-31 16:25:43.752924 | orchestrator | 2025-05-31 16:25:43.752931 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.752942 | orchestrator | Saturday 31 May 2025 16:16:59 +0000 (0:00:01.216) 0:03:24.522 ********** 2025-05-31 16:25:43.752948 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.752959 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.752965 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.752972 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.752978 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.752985 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.752991 | orchestrator | 2025-05-31 16:25:43.752998 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.753004 | orchestrator | Saturday 31 May 2025 16:16:59 +0000 (0:00:00.578) 0:03:25.101 ********** 2025-05-31 16:25:43.753011 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753017 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.753024 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.753030 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753036 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.753043 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.753049 | orchestrator | 2025-05-31 16:25:43.753056 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.753062 | orchestrator | Saturday 31 May 2025 16:17:00 +0000 (0:00:00.785) 0:03:25.886 ********** 2025-05-31 16:25:43.753069 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.753075 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753082 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.753088 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.753094 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.753101 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.753107 | orchestrator | skipping: [testbed-node-3] 
=> (item=0)  2025-05-31 16:25:43.753114 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753120 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.753126 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.753133 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.753139 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.753146 | orchestrator | 2025-05-31 16:25:43.753152 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.753159 | orchestrator | Saturday 31 May 2025 16:17:01 +0000 (0:00:00.657) 0:03:26.544 ********** 2025-05-31 16:25:43.753165 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753171 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.753178 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.753184 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.753191 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753197 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.753204 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.753211 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.753217 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.753223 | orchestrator | 2025-05-31 16:25:43.753230 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.753236 | orchestrator | Saturday 31 May 2025 16:17:01 +0000 (0:00:00.659) 0:03:27.203 ********** 2025-05-31 16:25:43.753243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.753249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.753256 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.753266 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753273 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 16:25:43.753279 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-31 16:25:43.753286 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 16:25:43.753292 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.753299 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 16:25:43.753305 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-31 16:25:43.753311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 16:25:43.753318 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.753330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.753336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.753343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.753349 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:25:43.753355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.753362 | orchestrator | skipping: 
[testbed-node-3] 2025-05-31 16:25:43.753368 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.753375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.753381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.753387 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.753394 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:25:43.753401 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.753407 | orchestrator | 2025-05-31 16:25:43.753413 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.753420 | orchestrator | Saturday 31 May 2025 16:17:03 +0000 (0:00:01.354) 0:03:28.558 ********** 2025-05-31 16:25:43.753426 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.753433 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.753439 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.753446 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.753452 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.753459 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.753465 | orchestrator | 2025-05-31 16:25:43.753475 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-31 16:25:43.753482 | orchestrator | Saturday 31 May 2025 16:17:06 +0000 (0:00:03.307) 0:03:31.866 ********** 2025-05-31 16:25:43.753488 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.753495 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.753502 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.753508 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.753515 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.753521 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.753527 | orchestrator | 2025-05-31 16:25:43.753534 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-31 16:25:43.753541 | orchestrator | Saturday 31 May 2025 16:17:07 +0000 (0:00:01.095) 0:03:32.961 ********** 2025-05-31 16:25:43.753547 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753554 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.753560 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.753567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.753573 | orchestrator | 2025-05-31 16:25:43.753580 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-31 16:25:43.753586 | orchestrator | Saturday 31 May 2025 16:17:08 +0000 (0:00:00.818) 0:03:33.779 ********** 2025-05-31 16:25:43.753593 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.753599 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.753606 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.753616 | orchestrator | 2025-05-31 16:25:43.753623 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-31 16:25:43.753630 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.753636 | orchestrator | 2025-05-31 16:25:43.753643 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon 
restart script] *********************** 2025-05-31 16:25:43.753649 | orchestrator | Saturday 31 May 2025 16:17:09 +0000 (0:00:00.783) 0:03:34.563 ********** 2025-05-31 16:25:43.753656 | orchestrator | 2025-05-31 16:25:43.753662 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-31 16:25:43.753669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.753675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.753682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.753689 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753695 | orchestrator | 2025-05-31 16:25:43.753702 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-31 16:25:43.753708 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.753715 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.753721 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.753728 | orchestrator | 2025-05-31 16:25:43.753734 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-31 16:25:43.753741 | orchestrator | Saturday 31 May 2025 16:17:10 +0000 (0:00:01.156) 0:03:35.720 ********** 2025-05-31 16:25:43.753747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.753754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.753760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.753767 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753773 | orchestrator | 2025-05-31 16:25:43.753780 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-31 16:25:43.753786 | orchestrator | Saturday 31 May 2025 16:17:10 +0000 (0:00:00.770) 0:03:36.490 ********** 2025-05-31 16:25:43.753793 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.753799 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.753806 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.753812 | orchestrator | 2025-05-31 16:25:43.753819 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-31 16:25:43.753825 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753832 | orchestrator | 2025-05-31 16:25:43.753838 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-31 16:25:43.753845 | orchestrator | Saturday 31 May 2025 16:17:11 +0000 (0:00:00.651) 0:03:37.142 ********** 2025-05-31 16:25:43.753851 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753857 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.753864 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.753870 | orchestrator | 2025-05-31 16:25:43.753890 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-31 16:25:43.753900 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753906 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.753913 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.753919 | orchestrator | 2025-05-31 16:25:43.753926 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-31 16:25:43.753932 | orchestrator | Saturday 31 May 2025 
16:17:12 +0000 (0:00:00.700) 0:03:37.842 ********** 2025-05-31 16:25:43.753939 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.753945 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.753952 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.753958 | orchestrator | 2025-05-31 16:25:43.753964 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-31 16:25:43.753971 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.753977 | orchestrator | 2025-05-31 16:25:43.753989 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-31 16:25:43.753995 | orchestrator | Saturday 31 May 2025 16:17:12 +0000 (0:00:00.483) 0:03:38.325 ********** 2025-05-31 16:25:43.754002 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.754008 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.754186 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.754296 | orchestrator | 2025-05-31 16:25:43.754313 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-31 16:25:43.754326 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.754337 | orchestrator | 2025-05-31 16:25:43.754348 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-31 16:25:43.754359 | orchestrator | Saturday 31 May 2025 16:17:13 +0000 (0:00:00.781) 0:03:39.107 ********** 2025-05-31 16:25:43.754370 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.754381 | orchestrator | 2025-05-31 16:25:43.754424 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-31 16:25:43.754436 | orchestrator | Saturday 31 May 2025 16:17:13 +0000 (0:00:00.115) 0:03:39.222 ********** 2025-05-31 16:25:43.754446 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.754457 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.754469 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.754480 | orchestrator | 2025-05-31 16:25:43.754490 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-31 16:25:43.754501 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.754511 | orchestrator | 2025-05-31 16:25:43.754522 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-31 16:25:43.754532 | orchestrator | Saturday 31 May 2025 16:17:14 +0000 (0:00:00.455) 0:03:39.678 ********** 2025-05-31 16:25:43.754543 | orchestrator | 2025-05-31 16:25:43.754553 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-31 16:25:43.754564 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.754575 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.754585 | orchestrator | 2025-05-31 16:25:43.754596 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-31 16:25:43.754607 | orchestrator | Saturday 31 May 2025 16:17:15 +0000 (0:00:00.891) 0:03:40.569 ********** 2025-05-31 16:25:43.754617 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.754628 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.754639 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.754649 | orchestrator | 
2025-05-31 16:25:43.754660 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-31 16:25:43.754671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.754681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.754692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.754703 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.754713 | orchestrator | 2025-05-31 16:25:43.754724 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-31 16:25:43.754735 | orchestrator | Saturday 31 May 2025 16:17:16 +0000 (0:00:00.944) 0:03:41.514 ********** 2025-05-31 16:25:43.754745 | orchestrator | 2025-05-31 16:25:43.754756 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-31 16:25:43.754766 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.754777 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.754787 | orchestrator | 2025-05-31 16:25:43.754798 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-31 16:25:43.754808 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.754819 | orchestrator | 2025-05-31 16:25:43.754829 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-31 16:25:43.754840 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.754851 | orchestrator | 2025-05-31 16:25:43.754935 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-31 16:25:43.754950 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.754961 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.754971 | orchestrator | 2025-05-31 16:25:43.754982 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-31 16:25:43.754993 | orchestrator | Saturday 31 May 2025 16:17:17 +0000 (0:00:01.672) 0:03:43.186 ********** 2025-05-31 16:25:43.755003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.755014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.755025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.755035 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.755046 | orchestrator | 2025-05-31 16:25:43.755057 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-31 16:25:43.755074 | orchestrator | Saturday 31 May 2025 16:17:18 +0000 (0:00:00.637) 0:03:43.824 ********** 2025-05-31 16:25:43.755093 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.755111 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.755131 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.755149 | orchestrator | 2025-05-31 16:25:43.755167 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-31 16:25:43.755178 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.755188 | orchestrator | 2025-05-31 16:25:43.755199 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-31 16:25:43.755224 | orchestrator | Saturday 31 May 2025 16:17:19 +0000 (0:00:00.979) 0:03:44.804 ********** 2025-05-31 16:25:43.755235 
| orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.755246 | orchestrator | 2025-05-31 16:25:43.755258 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-31 16:25:43.755276 | orchestrator | Saturday 31 May 2025 16:17:19 +0000 (0:00:00.524) 0:03:45.329 ********** 2025-05-31 16:25:43.755295 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.755313 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.755331 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.755348 | orchestrator | 2025-05-31 16:25:43.755359 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-31 16:25:43.755369 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.755380 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.755390 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.755400 | orchestrator | 2025-05-31 16:25:43.755411 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-31 16:25:43.755421 | orchestrator | Saturday 31 May 2025 16:17:20 +0000 (0:00:01.050) 0:03:46.379 ********** 2025-05-31 16:25:43.755432 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.755442 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.755452 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.755463 | orchestrator | 2025-05-31 16:25:43.755473 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-31 16:25:43.755484 | orchestrator | Saturday 31 May 2025 16:17:22 +0000 (0:00:01.298) 0:03:47.678 ********** 2025-05-31 16:25:43.755495 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.755519 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.755530 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.755541 | orchestrator | 2025-05-31 16:25:43.755552 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-31 16:25:43.755562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.755573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.755584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.755594 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.755605 | orchestrator | 2025-05-31 16:25:43.755615 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-31 16:25:43.755637 | orchestrator | Saturday 31 May 2025 16:17:23 +0000 (0:00:01.675) 0:03:49.353 ********** 2025-05-31 16:25:43.755647 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.755658 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.755669 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.755679 | orchestrator | 2025-05-31 16:25:43.755689 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-31 16:25:43.755700 | orchestrator | Saturday 31 May 2025 16:17:24 +0000 (0:00:00.955) 0:03:50.309 ********** 2025-05-31 16:25:43.755711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.755721 | orchestrator | 2025-05-31 16:25:43.755732 | orchestrator | RUNNING HANDLER 
[ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-31 16:25:43.755743 | orchestrator | Saturday 31 May 2025 16:17:25 +0000 (0:00:00.517) 0:03:50.827 ********** 2025-05-31 16:25:43.755753 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.755764 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.755774 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.755785 | orchestrator | 2025-05-31 16:25:43.755795 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-31 16:25:43.755806 | orchestrator | Saturday 31 May 2025 16:17:25 +0000 (0:00:00.328) 0:03:51.155 ********** 2025-05-31 16:25:43.755816 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.755827 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.755837 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.755848 | orchestrator | 2025-05-31 16:25:43.755859 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-31 16:25:43.755869 | orchestrator | Saturday 31 May 2025 16:17:27 +0000 (0:00:01.410) 0:03:52.566 ********** 2025-05-31 16:25:43.755909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.755930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.755947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.755958 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.755969 | orchestrator | 2025-05-31 16:25:43.755979 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-31 16:25:43.755991 | orchestrator | Saturday 31 May 2025 16:17:27 +0000 (0:00:00.695) 0:03:53.262 ********** 2025-05-31 16:25:43.756011 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.756030 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.756048 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.756067 | orchestrator | 2025-05-31 16:25:43.756087 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-31 16:25:43.756105 | orchestrator | Saturday 31 May 2025 16:17:28 +0000 (0:00:00.343) 0:03:53.605 ********** 2025-05-31 16:25:43.756123 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.756134 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.756145 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.756156 | orchestrator | 2025-05-31 16:25:43.756166 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-31 16:25:43.756177 | orchestrator | Saturday 31 May 2025 16:17:28 +0000 (0:00:00.357) 0:03:53.963 ********** 2025-05-31 16:25:43.756188 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.756198 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.756209 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.756220 | orchestrator | 2025-05-31 16:25:43.756230 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-31 16:25:43.756241 | orchestrator | Saturday 31 May 2025 16:17:28 +0000 (0:00:00.478) 0:03:54.442 ********** 2025-05-31 16:25:43.756252 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.756262 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.756273 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.756284 | orchestrator | 2025-05-31 
16:25:43.756301 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-31 16:25:43.756321 | orchestrator | Saturday 31 May 2025 16:17:29 +0000 (0:00:00.302) 0:03:54.744 ********** 2025-05-31 16:25:43.756332 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.756342 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.756353 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.756364 | orchestrator | 2025-05-31 16:25:43.756374 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-31 16:25:43.756385 | orchestrator | 2025-05-31 16:25:43.756396 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.756406 | orchestrator | Saturday 31 May 2025 16:17:31 +0000 (0:00:01.875) 0:03:56.620 ********** 2025-05-31 16:25:43.756417 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.756429 | orchestrator | 2025-05-31 16:25:43.756440 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.756450 | orchestrator | Saturday 31 May 2025 16:17:31 +0000 (0:00:00.592) 0:03:57.212 ********** 2025-05-31 16:25:43.756461 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.756471 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.756482 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.756493 | orchestrator | 2025-05-31 16:25:43.756503 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.756514 | orchestrator | Saturday 31 May 2025 16:17:32 +0000 (0:00:00.685) 0:03:57.897 ********** 2025-05-31 16:25:43.756525 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.756535 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.756554 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.756566 | orchestrator | 2025-05-31 16:25:43.756576 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.756587 | orchestrator | Saturday 31 May 2025 16:17:32 +0000 (0:00:00.274) 0:03:58.172 ********** 2025-05-31 16:25:43.756598 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.756609 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.756619 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.756630 | orchestrator | 2025-05-31 16:25:43.756641 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.756651 | orchestrator | Saturday 31 May 2025 16:17:33 +0000 (0:00:00.397) 0:03:58.569 ********** 2025-05-31 16:25:43.756662 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.756673 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.756683 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.756694 | orchestrator | 2025-05-31 16:25:43.756705 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.756715 | orchestrator | Saturday 31 May 2025 16:17:33 +0000 (0:00:00.307) 0:03:58.877 ********** 2025-05-31 16:25:43.756726 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.756737 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.756748 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.756758 | 
orchestrator | 2025-05-31 16:25:43.756769 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.756780 | orchestrator | Saturday 31 May 2025 16:17:34 +0000 (0:00:00.724) 0:03:59.601 ********** 2025-05-31 16:25:43.756790 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.756801 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.756812 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.756822 | orchestrator | 2025-05-31 16:25:43.756833 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-31 16:25:43.756848 | orchestrator | Saturday 31 May 2025 16:17:34 +0000 (0:00:00.299) 0:03:59.901 ********** 2025-05-31 16:25:43.756866 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.756904 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.756916 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.756926 | orchestrator | 2025-05-31 16:25:43.756944 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-31 16:25:43.756955 | orchestrator | Saturday 31 May 2025 16:17:34 +0000 (0:00:00.427) 0:04:00.329 ********** 2025-05-31 16:25:43.756966 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.756976 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.756987 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.756997 | orchestrator | 2025-05-31 16:25:43.757007 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.757018 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.260) 0:04:00.589 ********** 2025-05-31 16:25:43.757028 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757039 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757049 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757060 | orchestrator | 2025-05-31 16:25:43.757070 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.757081 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.265) 0:04:00.855 ********** 2025-05-31 16:25:43.757091 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757102 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757112 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757122 | orchestrator | 2025-05-31 16:25:43.757133 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.757143 | orchestrator | Saturday 31 May 2025 16:17:35 +0000 (0:00:00.258) 0:04:01.114 ********** 2025-05-31 16:25:43.757154 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.757164 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.757175 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.757185 | orchestrator | 2025-05-31 16:25:43.757196 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.757206 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.870) 0:04:01.984 ********** 2025-05-31 16:25:43.757217 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757228 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757238 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757248 | orchestrator | 2025-05-31 16:25:43.757259 | orchestrator | TASK [ceph-handler : 
set_fact handler_mon_status] ****************************** 2025-05-31 16:25:43.757269 | orchestrator | Saturday 31 May 2025 16:17:36 +0000 (0:00:00.289) 0:04:02.273 ********** 2025-05-31 16:25:43.757280 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.757290 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.757313 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.757324 | orchestrator | 2025-05-31 16:25:43.757335 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 16:25:43.757345 | orchestrator | Saturday 31 May 2025 16:17:37 +0000 (0:00:00.318) 0:04:02.592 ********** 2025-05-31 16:25:43.757356 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757366 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757377 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757387 | orchestrator | 2025-05-31 16:25:43.757398 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-31 16:25:43.757408 | orchestrator | Saturday 31 May 2025 16:17:37 +0000 (0:00:00.269) 0:04:02.861 ********** 2025-05-31 16:25:43.757419 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757429 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757440 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757450 | orchestrator | 2025-05-31 16:25:43.757460 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.757472 | orchestrator | Saturday 31 May 2025 16:17:37 +0000 (0:00:00.473) 0:04:03.334 ********** 2025-05-31 16:25:43.757490 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757509 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757529 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757547 | orchestrator | 2025-05-31 16:25:43.757567 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.757595 | orchestrator | Saturday 31 May 2025 16:17:38 +0000 (0:00:00.366) 0:04:03.700 ********** 2025-05-31 16:25:43.757614 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757632 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757651 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757662 | orchestrator | 2025-05-31 16:25:43.757673 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 16:25:43.757684 | orchestrator | Saturday 31 May 2025 16:17:38 +0000 (0:00:00.419) 0:04:04.120 ********** 2025-05-31 16:25:43.757694 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757705 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757715 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757726 | orchestrator | 2025-05-31 16:25:43.757736 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.757747 | orchestrator | Saturday 31 May 2025 16:17:38 +0000 (0:00:00.350) 0:04:04.471 ********** 2025-05-31 16:25:43.757758 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.757768 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.757779 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.757789 | orchestrator | 2025-05-31 16:25:43.757800 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 16:25:43.757811 | 
orchestrator | Saturday 31 May 2025 16:17:39 +0000 (0:00:00.493) 0:04:04.964 ********** 2025-05-31 16:25:43.757822 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.757832 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.757843 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.757853 | orchestrator | 2025-05-31 16:25:43.757864 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.757875 | orchestrator | Saturday 31 May 2025 16:17:39 +0000 (0:00:00.275) 0:04:05.240 ********** 2025-05-31 16:25:43.757910 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757921 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757931 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.757942 | orchestrator | 2025-05-31 16:25:43.757952 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.757963 | orchestrator | Saturday 31 May 2025 16:17:40 +0000 (0:00:00.262) 0:04:05.502 ********** 2025-05-31 16:25:43.757974 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.757984 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.757995 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758005 | orchestrator | 2025-05-31 16:25:43.758082 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.758109 | orchestrator | Saturday 31 May 2025 16:17:40 +0000 (0:00:00.254) 0:04:05.757 ********** 2025-05-31 16:25:43.758128 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758147 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758165 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758184 | orchestrator | 2025-05-31 16:25:43.758201 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.758220 | orchestrator | Saturday 31 May 2025 16:17:40 +0000 (0:00:00.468) 0:04:06.226 ********** 2025-05-31 16:25:43.758238 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758253 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758264 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758275 | orchestrator | 2025-05-31 16:25:43.758285 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.758296 | orchestrator | Saturday 31 May 2025 16:17:41 +0000 (0:00:00.325) 0:04:06.551 ********** 2025-05-31 16:25:43.758306 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758317 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758327 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758344 | orchestrator | 2025-05-31 16:25:43.758363 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.758376 | orchestrator | Saturday 31 May 2025 16:17:41 +0000 (0:00:00.423) 0:04:06.975 ********** 2025-05-31 16:25:43.758396 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758406 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758421 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758438 | orchestrator | 2025-05-31 16:25:43.758457 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.758475 | orchestrator | Saturday 31 May 2025 16:17:41 +0000 (0:00:00.323) 0:04:07.298 
********** 2025-05-31 16:25:43.758490 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758501 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758511 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758522 | orchestrator | 2025-05-31 16:25:43.758532 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.758543 | orchestrator | Saturday 31 May 2025 16:17:42 +0000 (0:00:00.492) 0:04:07.790 ********** 2025-05-31 16:25:43.758554 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758571 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758582 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758593 | orchestrator | 2025-05-31 16:25:43.758603 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.758614 | orchestrator | Saturday 31 May 2025 16:17:42 +0000 (0:00:00.351) 0:04:08.142 ********** 2025-05-31 16:25:43.758624 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758635 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758645 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758655 | orchestrator | 2025-05-31 16:25:43.758666 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.758677 | orchestrator | Saturday 31 May 2025 16:17:42 +0000 (0:00:00.317) 0:04:08.460 ********** 2025-05-31 16:25:43.758687 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758698 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758708 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758719 | orchestrator | 2025-05-31 16:25:43.758729 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.758740 | orchestrator | Saturday 31 May 2025 16:17:43 +0000 (0:00:00.301) 0:04:08.761 ********** 2025-05-31 16:25:43.758751 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758762 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758772 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758783 | orchestrator | 2025-05-31 16:25:43.758793 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.758828 | orchestrator | Saturday 31 May 2025 16:17:43 +0000 (0:00:00.446) 0:04:09.208 ********** 2025-05-31 16:25:43.758847 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.758866 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.758921 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.758940 | orchestrator | 2025-05-31 16:25:43.758959 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.758977 | orchestrator | Saturday 31 May 2025 16:17:43 +0000 (0:00:00.283) 0:04:09.491 ********** 2025-05-31 16:25:43.758996 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.759015 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.759028 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759038 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.759049 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.759059 | orchestrator | 
skipping: [testbed-node-1] 2025-05-31 16:25:43.759070 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.759080 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.759091 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759101 | orchestrator | 2025-05-31 16:25:43.759112 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.759131 | orchestrator | Saturday 31 May 2025 16:17:44 +0000 (0:00:00.374) 0:04:09.866 ********** 2025-05-31 16:25:43.759142 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-31 16:25:43.759153 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-31 16:25:43.759164 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759174 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-31 16:25:43.759185 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-31 16:25:43.759195 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759206 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-31 16:25:43.759216 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-31 16:25:43.759227 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759237 | orchestrator | 2025-05-31 16:25:43.759248 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.759258 | orchestrator | Saturday 31 May 2025 16:17:44 +0000 (0:00:00.361) 0:04:10.228 ********** 2025-05-31 16:25:43.759269 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759280 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759290 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759301 | orchestrator | 2025-05-31 16:25:43.759311 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.759322 | orchestrator | Saturday 31 May 2025 16:17:45 +0000 (0:00:00.416) 0:04:10.644 ********** 2025-05-31 16:25:43.759332 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759343 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759353 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759364 | orchestrator | 2025-05-31 16:25:43.759375 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.759385 | orchestrator | Saturday 31 May 2025 16:17:45 +0000 (0:00:00.230) 0:04:10.874 ********** 2025-05-31 16:25:43.759396 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759406 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759417 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759427 | orchestrator | 2025-05-31 16:25:43.759438 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.759449 | orchestrator | Saturday 31 May 2025 16:17:45 +0000 (0:00:00.266) 0:04:11.140 ********** 2025-05-31 16:25:43.759460 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759470 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759481 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759491 | orchestrator | 2025-05-31 16:25:43.759502 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-05-31 16:25:43.759512 | orchestrator | Saturday 31 May 2025 16:17:45 +0000 (0:00:00.248) 0:04:11.389 ********** 2025-05-31 16:25:43.759523 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759534 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759544 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759555 | orchestrator | 2025-05-31 16:25:43.759565 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.759617 | orchestrator | Saturday 31 May 2025 16:17:46 +0000 (0:00:00.395) 0:04:11.785 ********** 2025-05-31 16:25:43.759629 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759640 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.759650 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.759661 | orchestrator | 2025-05-31 16:25:43.759671 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.759682 | orchestrator | Saturday 31 May 2025 16:17:46 +0000 (0:00:00.268) 0:04:12.053 ********** 2025-05-31 16:25:43.759693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.759703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.759721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.759732 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759742 | orchestrator | 2025-05-31 16:25:43.759753 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.759763 | orchestrator | Saturday 31 May 2025 16:17:46 +0000 (0:00:00.378) 0:04:12.432 ********** 2025-05-31 16:25:43.759774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.759784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.759795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.759806 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759816 | orchestrator | 2025-05-31 16:25:43.759827 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.759838 | orchestrator | Saturday 31 May 2025 16:17:47 +0000 (0:00:00.332) 0:04:12.764 ********** 2025-05-31 16:25:43.759865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.759912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.759934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.759947 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.759958 | orchestrator | 2025-05-31 16:25:43.759968 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.759979 | orchestrator | Saturday 31 May 2025 16:17:47 +0000 (0:00:00.393) 0:04:13.158 ********** 2025-05-31 16:25:43.759990 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760000 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760010 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760021 | orchestrator | 2025-05-31 16:25:43.760031 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.760042 | orchestrator | Saturday 31 May 2025 16:17:48 +0000 
(0:00:00.496) 0:04:13.654 ********** 2025-05-31 16:25:43.760053 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.760063 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760073 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.760084 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760094 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.760105 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760115 | orchestrator | 2025-05-31 16:25:43.760126 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.760137 | orchestrator | Saturday 31 May 2025 16:17:48 +0000 (0:00:00.440) 0:04:14.095 ********** 2025-05-31 16:25:43.760147 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760158 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760168 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760178 | orchestrator | 2025-05-31 16:25:43.760189 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.760199 | orchestrator | Saturday 31 May 2025 16:17:48 +0000 (0:00:00.342) 0:04:14.438 ********** 2025-05-31 16:25:43.760210 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760220 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760231 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760241 | orchestrator | 2025-05-31 16:25:43.760251 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.760262 | orchestrator | Saturday 31 May 2025 16:17:49 +0000 (0:00:00.292) 0:04:14.730 ********** 2025-05-31 16:25:43.760273 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.760283 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760293 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.760304 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760314 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.760325 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760342 | orchestrator | 2025-05-31 16:25:43.760353 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.760364 | orchestrator | Saturday 31 May 2025 16:17:50 +0000 (0:00:00.901) 0:04:15.632 ********** 2025-05-31 16:25:43.760374 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760384 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760395 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760405 | orchestrator | 2025-05-31 16:25:43.760416 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.760426 | orchestrator | Saturday 31 May 2025 16:17:50 +0000 (0:00:00.340) 0:04:15.972 ********** 2025-05-31 16:25:43.760437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.760447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.760458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.760472 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 16:25:43.760505 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-4)  2025-05-31 16:25:43.760515 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 16:25:43.760526 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760537 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 16:25:43.760548 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-31 16:25:43.760563 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 16:25:43.760574 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760585 | orchestrator | 2025-05-31 16:25:43.760595 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.760606 | orchestrator | Saturday 31 May 2025 16:17:50 +0000 (0:00:00.519) 0:04:16.492 ********** 2025-05-31 16:25:43.760617 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760627 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760638 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760648 | orchestrator | 2025-05-31 16:25:43.760659 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.760670 | orchestrator | Saturday 31 May 2025 16:17:51 +0000 (0:00:00.628) 0:04:17.120 ********** 2025-05-31 16:25:43.760680 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760691 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760701 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760711 | orchestrator | 2025-05-31 16:25:43.760722 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.760732 | orchestrator | Saturday 31 May 2025 16:17:52 +0000 (0:00:00.490) 0:04:17.611 ********** 2025-05-31 16:25:43.760743 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760753 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760764 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760775 | orchestrator | 2025-05-31 16:25:43.760786 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.760796 | orchestrator | Saturday 31 May 2025 16:17:52 +0000 (0:00:00.667) 0:04:18.279 ********** 2025-05-31 16:25:43.760807 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.760817 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.760834 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.760846 | orchestrator | 2025-05-31 16:25:43.760856 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-31 16:25:43.760867 | orchestrator | Saturday 31 May 2025 16:17:53 +0000 (0:00:00.484) 0:04:18.764 ********** 2025-05-31 16:25:43.760921 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.760935 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.760946 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.760957 | orchestrator | 2025-05-31 16:25:43.760967 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-31 16:25:43.760986 | orchestrator | Saturday 31 May 2025 16:17:53 +0000 (0:00:00.454) 0:04:19.218 ********** 2025-05-31 16:25:43.760997 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.761008 | orchestrator | 2025-05-31 
16:25:43.761018 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-31 16:25:43.761029 | orchestrator | Saturday 31 May 2025 16:17:54 +0000 (0:00:00.543) 0:04:19.762 ********** 2025-05-31 16:25:43.761039 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.761050 | orchestrator | 2025-05-31 16:25:43.761061 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-31 16:25:43.761071 | orchestrator | Saturday 31 May 2025 16:17:54 +0000 (0:00:00.137) 0:04:19.899 ********** 2025-05-31 16:25:43.761082 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-31 16:25:43.761092 | orchestrator | 2025-05-31 16:25:43.761103 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-31 16:25:43.761113 | orchestrator | Saturday 31 May 2025 16:17:55 +0000 (0:00:00.608) 0:04:20.507 ********** 2025-05-31 16:25:43.761123 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.761134 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.761144 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.761155 | orchestrator | 2025-05-31 16:25:43.761165 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-31 16:25:43.761176 | orchestrator | Saturday 31 May 2025 16:17:55 +0000 (0:00:00.461) 0:04:20.968 ********** 2025-05-31 16:25:43.761186 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.761197 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.761207 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.761217 | orchestrator | 2025-05-31 16:25:43.761228 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-31 16:25:43.761238 | orchestrator | Saturday 31 May 2025 16:17:55 +0000 (0:00:00.302) 0:04:21.270 ********** 2025-05-31 16:25:43.761249 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.761259 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.761270 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.761280 | orchestrator | 2025-05-31 16:25:43.761291 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-31 16:25:43.761301 | orchestrator | Saturday 31 May 2025 16:17:56 +0000 (0:00:01.135) 0:04:22.406 ********** 2025-05-31 16:25:43.761312 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.761322 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.761332 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.761343 | orchestrator | 2025-05-31 16:25:43.761353 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-31 16:25:43.761373 | orchestrator | Saturday 31 May 2025 16:17:57 +0000 (0:00:00.692) 0:04:23.099 ********** 2025-05-31 16:25:43.761385 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.761395 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.761406 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.761416 | orchestrator | 2025-05-31 16:25:43.761427 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-31 16:25:43.761438 | orchestrator | Saturday 31 May 2025 16:17:58 +0000 (0:00:00.786) 0:04:23.885 ********** 2025-05-31 16:25:43.761448 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.761459 | orchestrator | ok: [testbed-node-1] 2025-05-31 
16:25:43.761469 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.761480 | orchestrator | 2025-05-31 16:25:43.761490 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-31 16:25:43.761501 | orchestrator | Saturday 31 May 2025 16:17:58 +0000 (0:00:00.584) 0:04:24.469 ********** 2025-05-31 16:25:43.761511 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.761522 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.761532 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.761543 | orchestrator | 2025-05-31 16:25:43.761560 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-31 16:25:43.761576 | orchestrator | Saturday 31 May 2025 16:17:59 +0000 (0:00:00.264) 0:04:24.734 ********** 2025-05-31 16:25:43.761587 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.761597 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.761608 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.761618 | orchestrator | 2025-05-31 16:25:43.761629 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-31 16:25:43.761640 | orchestrator | Saturday 31 May 2025 16:17:59 +0000 (0:00:00.270) 0:04:25.004 ********** 2025-05-31 16:25:43.761650 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.761661 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.761672 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.761682 | orchestrator | 2025-05-31 16:25:43.761693 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-31 16:25:43.761703 | orchestrator | Saturday 31 May 2025 16:17:59 +0000 (0:00:00.429) 0:04:25.434 ********** 2025-05-31 16:25:43.761714 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.761724 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.761734 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.761745 | orchestrator | 2025-05-31 16:25:43.761755 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-31 16:25:43.761766 | orchestrator | Saturday 31 May 2025 16:18:00 +0000 (0:00:00.279) 0:04:25.714 ********** 2025-05-31 16:25:43.761777 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.761788 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.761798 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.761809 | orchestrator | 2025-05-31 16:25:43.761819 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-31 16:25:43.761830 | orchestrator | Saturday 31 May 2025 16:18:01 +0000 (0:00:01.202) 0:04:26.917 ********** 2025-05-31 16:25:43.761847 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.761858 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.761869 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.762259 | orchestrator | 2025-05-31 16:25:43.762297 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-31 16:25:43.762306 | orchestrator | Saturday 31 May 2025 16:18:01 +0000 (0:00:00.422) 0:04:27.340 ********** 2025-05-31 16:25:43.762315 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.762323 | orchestrator | 2025-05-31 16:25:43.762331 | orchestrator | TASK [ceph-mon : 
ensure systemd service override directory exists] ************* 2025-05-31 16:25:43.762339 | orchestrator | Saturday 31 May 2025 16:18:02 +0000 (0:00:00.546) 0:04:27.887 ********** 2025-05-31 16:25:43.762346 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.762354 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.762362 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.762370 | orchestrator | 2025-05-31 16:25:43.762377 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-31 16:25:43.762385 | orchestrator | Saturday 31 May 2025 16:18:02 +0000 (0:00:00.364) 0:04:28.251 ********** 2025-05-31 16:25:43.762393 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.762400 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.762408 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.762416 | orchestrator | 2025-05-31 16:25:43.762423 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-31 16:25:43.762431 | orchestrator | Saturday 31 May 2025 16:18:03 +0000 (0:00:00.591) 0:04:28.843 ********** 2025-05-31 16:25:43.762439 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.762447 | orchestrator | 2025-05-31 16:25:43.762455 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-31 16:25:43.762462 | orchestrator | Saturday 31 May 2025 16:18:03 +0000 (0:00:00.646) 0:04:29.489 ********** 2025-05-31 16:25:43.762480 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.762488 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.762496 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.762503 | orchestrator | 2025-05-31 16:25:43.762511 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-31 16:25:43.762519 | orchestrator | Saturday 31 May 2025 16:18:05 +0000 (0:00:01.257) 0:04:30.747 ********** 2025-05-31 16:25:43.762526 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.762534 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.762542 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.762549 | orchestrator | 2025-05-31 16:25:43.762557 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-31 16:25:43.762565 | orchestrator | Saturday 31 May 2025 16:18:06 +0000 (0:00:01.122) 0:04:31.869 ********** 2025-05-31 16:25:43.762573 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.762580 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.762588 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.762596 | orchestrator | 2025-05-31 16:25:43.762603 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-31 16:25:43.762611 | orchestrator | Saturday 31 May 2025 16:18:07 +0000 (0:00:01.599) 0:04:33.468 ********** 2025-05-31 16:25:43.762619 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.762627 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.762634 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.762642 | orchestrator | 2025-05-31 16:25:43.762650 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-31 16:25:43.762657 | orchestrator | Saturday 31 May 2025 16:18:09 
+0000 (0:00:01.883) 0:04:35.351 ********** 2025-05-31 16:25:43.762665 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.762673 | orchestrator | 2025-05-31 16:25:43.762680 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-31 16:25:43.762688 | orchestrator | Saturday 31 May 2025 16:18:10 +0000 (0:00:00.675) 0:04:36.027 ********** 2025-05-31 16:25:43.762696 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-31 16:25:43.762704 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.762712 | orchestrator | 2025-05-31 16:25:43.762725 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-31 16:25:43.762733 | orchestrator | Saturday 31 May 2025 16:18:32 +0000 (0:00:21.485) 0:04:57.513 ********** 2025-05-31 16:25:43.762741 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.762749 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.762756 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.762764 | orchestrator | 2025-05-31 16:25:43.762772 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-31 16:25:43.762779 | orchestrator | Saturday 31 May 2025 16:18:39 +0000 (0:00:07.499) 0:05:05.013 ********** 2025-05-31 16:25:43.762788 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.762795 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.762803 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.762810 | orchestrator | 2025-05-31 16:25:43.762818 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-31 16:25:43.762826 | orchestrator | Saturday 31 May 2025 16:18:40 +0000 (0:00:01.121) 0:05:06.135 ********** 2025-05-31 16:25:43.762834 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.762841 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.762849 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.762857 | orchestrator | 2025-05-31 16:25:43.762866 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-31 16:25:43.762875 | orchestrator | Saturday 31 May 2025 16:18:41 +0000 (0:00:00.725) 0:05:06.860 ********** 2025-05-31 16:25:43.762908 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.762922 | orchestrator | 2025-05-31 16:25:43.762931 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-31 16:25:43.762976 | orchestrator | Saturday 31 May 2025 16:18:42 +0000 (0:00:00.953) 0:05:07.814 ********** 2025-05-31 16:25:43.762986 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.762995 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.763004 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.763012 | orchestrator | 2025-05-31 16:25:43.763021 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-31 16:25:43.763030 | orchestrator | Saturday 31 May 2025 16:18:42 +0000 (0:00:00.425) 0:05:08.239 ********** 2025-05-31 16:25:43.763039 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.763047 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.763056 | 
orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.763065 | orchestrator | 2025-05-31 16:25:43.763073 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-31 16:25:43.763082 | orchestrator | Saturday 31 May 2025 16:18:43 +0000 (0:00:01.249) 0:05:09.488 ********** 2025-05-31 16:25:43.763091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.763100 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.763109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.763117 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763125 | orchestrator | 2025-05-31 16:25:43.763134 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-31 16:25:43.763144 | orchestrator | Saturday 31 May 2025 16:18:45 +0000 (0:00:01.167) 0:05:10.656 ********** 2025-05-31 16:25:43.763152 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.763161 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.763169 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.763178 | orchestrator | 2025-05-31 16:25:43.763186 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-31 16:25:43.763195 | orchestrator | Saturday 31 May 2025 16:18:45 +0000 (0:00:00.360) 0:05:11.017 ********** 2025-05-31 16:25:43.763204 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.763213 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.763221 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.763229 | orchestrator | 2025-05-31 16:25:43.763237 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-31 16:25:43.763245 | orchestrator | 2025-05-31 16:25:43.763252 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.763260 | orchestrator | Saturday 31 May 2025 16:18:47 +0000 (0:00:02.114) 0:05:13.132 ********** 2025-05-31 16:25:43.763268 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.763277 | orchestrator | 2025-05-31 16:25:43.763284 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.763292 | orchestrator | Saturday 31 May 2025 16:18:48 +0000 (0:00:00.761) 0:05:13.893 ********** 2025-05-31 16:25:43.763300 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.763307 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.763315 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.763323 | orchestrator | 2025-05-31 16:25:43.763330 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.763338 | orchestrator | Saturday 31 May 2025 16:18:49 +0000 (0:00:00.750) 0:05:14.644 ********** 2025-05-31 16:25:43.763346 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763353 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763361 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763369 | orchestrator | 2025-05-31 16:25:43.763376 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.763384 | orchestrator | Saturday 31 May 2025 16:18:49 +0000 (0:00:00.326) 0:05:14.970 ********** 
2025-05-31 16:25:43.763402 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763410 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763418 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763425 | orchestrator | 2025-05-31 16:25:43.763433 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.763441 | orchestrator | Saturday 31 May 2025 16:18:50 +0000 (0:00:00.569) 0:05:15.540 ********** 2025-05-31 16:25:43.763448 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763456 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763464 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763471 | orchestrator | 2025-05-31 16:25:43.763479 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.763491 | orchestrator | Saturday 31 May 2025 16:18:50 +0000 (0:00:00.481) 0:05:16.021 ********** 2025-05-31 16:25:43.763499 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.763506 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.763514 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.763522 | orchestrator | 2025-05-31 16:25:43.763530 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.763537 | orchestrator | Saturday 31 May 2025 16:18:51 +0000 (0:00:00.770) 0:05:16.792 ********** 2025-05-31 16:25:43.763545 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763552 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763560 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763568 | orchestrator | 2025-05-31 16:25:43.763575 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-31 16:25:43.763583 | orchestrator | Saturday 31 May 2025 16:18:51 +0000 (0:00:00.268) 0:05:17.061 ********** 2025-05-31 16:25:43.763591 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763598 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763606 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763613 | orchestrator | 2025-05-31 16:25:43.763621 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-31 16:25:43.763629 | orchestrator | Saturday 31 May 2025 16:18:51 +0000 (0:00:00.370) 0:05:17.431 ********** 2025-05-31 16:25:43.763637 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763644 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763652 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763660 | orchestrator | 2025-05-31 16:25:43.763667 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.763675 | orchestrator | Saturday 31 May 2025 16:18:52 +0000 (0:00:00.254) 0:05:17.686 ********** 2025-05-31 16:25:43.763705 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763714 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763722 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763730 | orchestrator | 2025-05-31 16:25:43.763737 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.763745 | orchestrator | Saturday 31 May 2025 16:18:52 +0000 (0:00:00.242) 0:05:17.929 ********** 2025-05-31 16:25:43.763753 | orchestrator | skipping: [testbed-node-0] 
2025-05-31 16:25:43.763760 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763768 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763776 | orchestrator | 2025-05-31 16:25:43.763783 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.763791 | orchestrator | Saturday 31 May 2025 16:18:52 +0000 (0:00:00.268) 0:05:18.198 ********** 2025-05-31 16:25:43.763798 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.763806 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.763814 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.763821 | orchestrator | 2025-05-31 16:25:43.763829 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.763837 | orchestrator | Saturday 31 May 2025 16:18:53 +0000 (0:00:00.772) 0:05:18.970 ********** 2025-05-31 16:25:43.763844 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763857 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763865 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763873 | orchestrator | 2025-05-31 16:25:43.763898 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-31 16:25:43.763906 | orchestrator | Saturday 31 May 2025 16:18:53 +0000 (0:00:00.239) 0:05:19.210 ********** 2025-05-31 16:25:43.763913 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.763921 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.763929 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.763936 | orchestrator | 2025-05-31 16:25:43.763944 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 16:25:43.763952 | orchestrator | Saturday 31 May 2025 16:18:54 +0000 (0:00:00.318) 0:05:19.529 ********** 2025-05-31 16:25:43.763959 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.763967 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.763975 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.763982 | orchestrator | 2025-05-31 16:25:43.763991 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-31 16:25:43.763998 | orchestrator | Saturday 31 May 2025 16:18:54 +0000 (0:00:00.306) 0:05:19.835 ********** 2025-05-31 16:25:43.764006 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764013 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764021 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764029 | orchestrator | 2025-05-31 16:25:43.764037 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.764044 | orchestrator | Saturday 31 May 2025 16:18:54 +0000 (0:00:00.434) 0:05:20.270 ********** 2025-05-31 16:25:43.764052 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764060 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764068 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764075 | orchestrator | 2025-05-31 16:25:43.764083 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.764091 | orchestrator | Saturday 31 May 2025 16:18:55 +0000 (0:00:00.268) 0:05:20.538 ********** 2025-05-31 16:25:43.764099 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764106 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764114 
| orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764122 | orchestrator | 2025-05-31 16:25:43.764129 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 16:25:43.764137 | orchestrator | Saturday 31 May 2025 16:18:55 +0000 (0:00:00.296) 0:05:20.835 ********** 2025-05-31 16:25:43.764145 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764152 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764160 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764167 | orchestrator | 2025-05-31 16:25:43.764175 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.764183 | orchestrator | Saturday 31 May 2025 16:18:55 +0000 (0:00:00.276) 0:05:21.112 ********** 2025-05-31 16:25:43.764191 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.764198 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.764206 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.764214 | orchestrator | 2025-05-31 16:25:43.764222 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 16:25:43.764233 | orchestrator | Saturday 31 May 2025 16:18:56 +0000 (0:00:00.465) 0:05:21.577 ********** 2025-05-31 16:25:43.764241 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.764249 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.764256 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.764264 | orchestrator | 2025-05-31 16:25:43.764272 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.764280 | orchestrator | Saturday 31 May 2025 16:18:56 +0000 (0:00:00.287) 0:05:21.865 ********** 2025-05-31 16:25:43.764287 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764295 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764303 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764315 | orchestrator | 2025-05-31 16:25:43.764323 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.764331 | orchestrator | Saturday 31 May 2025 16:18:56 +0000 (0:00:00.279) 0:05:22.144 ********** 2025-05-31 16:25:43.764339 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764346 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764354 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764362 | orchestrator | 2025-05-31 16:25:43.764370 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.764377 | orchestrator | Saturday 31 May 2025 16:18:56 +0000 (0:00:00.300) 0:05:22.445 ********** 2025-05-31 16:25:43.764385 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764393 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764400 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764408 | orchestrator | 2025-05-31 16:25:43.764416 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.764423 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:00.438) 0:05:22.884 ********** 2025-05-31 16:25:43.764453 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764462 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764470 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764478 | orchestrator | 
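The ceph-config tasks around this point (skipped on the mon/mgr nodes above, run later for the OSD nodes) derive num_osds by combining the dry-run report of 'ceph-volume lvm batch --report' with the OSDs already listed by 'ceph-volume lvm list'. The following is a minimal illustrative sketch of that counting idea, not the role's actual code; the device list is hypothetical, and the assumed JSON shapes (legacy report as a dict with an "osds" key, newer report as a bare list, 'lvm list' as a dict keyed by OSD id) are inferred from the task names in this log.

import json
import subprocess

def count_planned_osds(devices):
    """Dry-run 'ceph-volume lvm batch --report' and count the OSDs it would create."""
    out = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *devices],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    # Assumption: legacy report is {"osds": [...], ...}; newer report is a plain list.
    osds = report.get("osds", []) if isinstance(report, dict) else report
    return len(osds)

def count_existing_osds():
    """Count OSDs already created on this host via 'ceph-volume lvm list'."""
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Assumption: the listing is a dict keyed by OSD id.
    return len(json.loads(out))

if __name__ == "__main__":
    devices = ["/dev/sdb", "/dev/sdc"]  # hypothetical device list for illustration
    num_osds = count_planned_osds(devices) + count_existing_osds()
    print(f"num_osds = {num_osds}")
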
2025-05-31 16:25:43.764485 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.764493 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:00.297) 0:05:23.181 ********** 2025-05-31 16:25:43.764501 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764509 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764516 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764524 | orchestrator | 2025-05-31 16:25:43.764532 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.764539 | orchestrator | Saturday 31 May 2025 16:18:57 +0000 (0:00:00.306) 0:05:23.488 ********** 2025-05-31 16:25:43.764547 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764555 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764562 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764570 | orchestrator | 2025-05-31 16:25:43.764578 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.764585 | orchestrator | Saturday 31 May 2025 16:18:58 +0000 (0:00:00.277) 0:05:23.765 ********** 2025-05-31 16:25:43.764593 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764601 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764608 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764616 | orchestrator | 2025-05-31 16:25:43.764624 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.764631 | orchestrator | Saturday 31 May 2025 16:18:58 +0000 (0:00:00.432) 0:05:24.198 ********** 2025-05-31 16:25:43.764639 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764647 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764654 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764662 | orchestrator | 2025-05-31 16:25:43.764670 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.764677 | orchestrator | Saturday 31 May 2025 16:18:58 +0000 (0:00:00.288) 0:05:24.487 ********** 2025-05-31 16:25:43.764685 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764693 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764700 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764708 | orchestrator | 2025-05-31 16:25:43.764716 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.764724 | orchestrator | Saturday 31 May 2025 16:18:59 +0000 (0:00:00.303) 0:05:24.790 ********** 2025-05-31 16:25:43.764731 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764739 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764752 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764759 | orchestrator | 2025-05-31 16:25:43.764767 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.764775 | orchestrator | Saturday 31 May 2025 16:18:59 +0000 (0:00:00.331) 0:05:25.122 ********** 2025-05-31 16:25:43.764783 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764791 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764798 | orchestrator | skipping: [testbed-node-2] 2025-05-31 
16:25:43.764806 | orchestrator | 2025-05-31 16:25:43.764814 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.764821 | orchestrator | Saturday 31 May 2025 16:19:00 +0000 (0:00:00.463) 0:05:25.585 ********** 2025-05-31 16:25:43.764829 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764837 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764844 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764852 | orchestrator | 2025-05-31 16:25:43.764860 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.764868 | orchestrator | Saturday 31 May 2025 16:19:00 +0000 (0:00:00.319) 0:05:25.905 ********** 2025-05-31 16:25:43.764875 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.764901 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.764909 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.764917 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.764925 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.764932 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.764940 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.764948 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.764959 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.764967 | orchestrator | 2025-05-31 16:25:43.764975 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.764983 | orchestrator | Saturday 31 May 2025 16:19:00 +0000 (0:00:00.400) 0:05:26.305 ********** 2025-05-31 16:25:43.764990 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-31 16:25:43.764998 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-31 16:25:43.765006 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-31 16:25:43.765013 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-31 16:25:43.765021 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765029 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765036 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-31 16:25:43.765044 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-31 16:25:43.765052 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765060 | orchestrator | 2025-05-31 16:25:43.765068 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.765075 | orchestrator | Saturday 31 May 2025 16:19:01 +0000 (0:00:00.336) 0:05:26.642 ********** 2025-05-31 16:25:43.765083 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765091 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765098 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765106 | orchestrator | 2025-05-31 16:25:43.765114 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.765145 | orchestrator | Saturday 31 May 2025 16:19:01 +0000 (0:00:00.450) 0:05:27.093 ********** 2025-05-31 16:25:43.765154 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765162 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765170 | 
orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765177 | orchestrator | 2025-05-31 16:25:43.765185 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.765193 | orchestrator | Saturday 31 May 2025 16:19:01 +0000 (0:00:00.326) 0:05:27.420 ********** 2025-05-31 16:25:43.765206 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765214 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765222 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765229 | orchestrator | 2025-05-31 16:25:43.765237 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.765245 | orchestrator | Saturday 31 May 2025 16:19:02 +0000 (0:00:00.298) 0:05:27.719 ********** 2025-05-31 16:25:43.765253 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765260 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765268 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765276 | orchestrator | 2025-05-31 16:25:43.765283 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:25:43.765291 | orchestrator | Saturday 31 May 2025 16:19:02 +0000 (0:00:00.326) 0:05:28.045 ********** 2025-05-31 16:25:43.765299 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765307 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765314 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765322 | orchestrator | 2025-05-31 16:25:43.765330 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.765338 | orchestrator | Saturday 31 May 2025 16:19:03 +0000 (0:00:00.459) 0:05:28.505 ********** 2025-05-31 16:25:43.765345 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765353 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765360 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765368 | orchestrator | 2025-05-31 16:25:43.765376 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.765384 | orchestrator | Saturday 31 May 2025 16:19:03 +0000 (0:00:00.343) 0:05:28.848 ********** 2025-05-31 16:25:43.765391 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.765399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.765407 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.765415 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765423 | orchestrator | 2025-05-31 16:25:43.765430 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.765438 | orchestrator | Saturday 31 May 2025 16:19:03 +0000 (0:00:00.373) 0:05:29.222 ********** 2025-05-31 16:25:43.765446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.765454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.765461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.765469 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765477 | orchestrator | 2025-05-31 16:25:43.765484 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] 
****** 2025-05-31 16:25:43.765492 | orchestrator | Saturday 31 May 2025 16:19:04 +0000 (0:00:00.393) 0:05:29.615 ********** 2025-05-31 16:25:43.765500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.765508 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.765515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.765523 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765531 | orchestrator | 2025-05-31 16:25:43.765539 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.765546 | orchestrator | Saturday 31 May 2025 16:19:04 +0000 (0:00:00.373) 0:05:29.989 ********** 2025-05-31 16:25:43.765554 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765562 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765570 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765577 | orchestrator | 2025-05-31 16:25:43.765585 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.765593 | orchestrator | Saturday 31 May 2025 16:19:05 +0000 (0:00:00.599) 0:05:30.588 ********** 2025-05-31 16:25:43.765605 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.765617 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765625 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.765633 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765640 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.765648 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765655 | orchestrator | 2025-05-31 16:25:43.765663 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.765671 | orchestrator | Saturday 31 May 2025 16:19:05 +0000 (0:00:00.506) 0:05:31.094 ********** 2025-05-31 16:25:43.765679 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765686 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765694 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765701 | orchestrator | 2025-05-31 16:25:43.765709 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.765717 | orchestrator | Saturday 31 May 2025 16:19:05 +0000 (0:00:00.345) 0:05:31.440 ********** 2025-05-31 16:25:43.765725 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765733 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765740 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765748 | orchestrator | 2025-05-31 16:25:43.765756 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.765764 | orchestrator | Saturday 31 May 2025 16:19:06 +0000 (0:00:00.340) 0:05:31.780 ********** 2025-05-31 16:25:43.765771 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.765779 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765787 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.765795 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765824 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.765834 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765841 | orchestrator | 2025-05-31 
16:25:43.765849 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.765857 | orchestrator | Saturday 31 May 2025 16:19:07 +0000 (0:00:00.943) 0:05:32.724 ********** 2025-05-31 16:25:43.765865 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.765873 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.765939 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.765949 | orchestrator | 2025-05-31 16:25:43.765956 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.765964 | orchestrator | Saturday 31 May 2025 16:19:07 +0000 (0:00:00.357) 0:05:33.081 ********** 2025-05-31 16:25:43.765972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.765980 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.765987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.765995 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 16:25:43.766011 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-31 16:25:43.766042 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 16:25:43.766050 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766057 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 16:25:43.766065 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-31 16:25:43.766073 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 16:25:43.766081 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766088 | orchestrator | 2025-05-31 16:25:43.766096 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.766104 | orchestrator | Saturday 31 May 2025 16:19:08 +0000 (0:00:00.646) 0:05:33.728 ********** 2025-05-31 16:25:43.766112 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766126 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766134 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766141 | orchestrator | 2025-05-31 16:25:43.766149 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.766157 | orchestrator | Saturday 31 May 2025 16:19:09 +0000 (0:00:00.805) 0:05:34.534 ********** 2025-05-31 16:25:43.766164 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766172 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766180 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766187 | orchestrator | 2025-05-31 16:25:43.766195 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.766203 | orchestrator | Saturday 31 May 2025 16:19:09 +0000 (0:00:00.568) 0:05:35.102 ********** 2025-05-31 16:25:43.766211 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766219 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766227 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766234 | orchestrator | 2025-05-31 16:25:43.766242 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.766250 | orchestrator | Saturday 31 May 2025 
16:19:10 +0000 (0:00:00.820) 0:05:35.922 ********** 2025-05-31 16:25:43.766258 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766266 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766274 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766281 | orchestrator | 2025-05-31 16:25:43.766289 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-31 16:25:43.766297 | orchestrator | Saturday 31 May 2025 16:19:11 +0000 (0:00:00.617) 0:05:36.540 ********** 2025-05-31 16:25:43.766305 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:43.766312 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:25:43.766320 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:25:43.766328 | orchestrator | 2025-05-31 16:25:43.766336 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-31 16:25:43.766344 | orchestrator | Saturday 31 May 2025 16:19:12 +0000 (0:00:01.073) 0:05:37.613 ********** 2025-05-31 16:25:43.766356 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.766364 | orchestrator | 2025-05-31 16:25:43.766372 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-31 16:25:43.766379 | orchestrator | Saturday 31 May 2025 16:19:12 +0000 (0:00:00.556) 0:05:38.169 ********** 2025-05-31 16:25:43.766387 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.766394 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.766402 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.766410 | orchestrator | 2025-05-31 16:25:43.766417 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-31 16:25:43.766425 | orchestrator | Saturday 31 May 2025 16:19:13 +0000 (0:00:00.782) 0:05:38.952 ********** 2025-05-31 16:25:43.766432 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766440 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766448 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766455 | orchestrator | 2025-05-31 16:25:43.766463 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-31 16:25:43.766471 | orchestrator | Saturday 31 May 2025 16:19:14 +0000 (0:00:00.596) 0:05:39.549 ********** 2025-05-31 16:25:43.766478 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:25:43.766486 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:25:43.766494 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:25:43.766502 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-31 16:25:43.766510 | orchestrator | 2025-05-31 16:25:43.766518 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-31 16:25:43.766531 | orchestrator | Saturday 31 May 2025 16:19:21 +0000 (0:00:07.687) 0:05:47.237 ********** 2025-05-31 16:25:43.766566 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.766575 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.766583 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.766590 | orchestrator | 2025-05-31 16:25:43.766598 | orchestrator | TASK [ceph-mgr : get keys 
from monitors] *************************************** 2025-05-31 16:25:43.766606 | orchestrator | Saturday 31 May 2025 16:19:22 +0000 (0:00:00.327) 0:05:47.564 ********** 2025-05-31 16:25:43.766614 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-31 16:25:43.766622 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-31 16:25:43.766629 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-31 16:25:43.766637 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-31 16:25:43.766645 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:25:43.766653 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:25:43.766661 | orchestrator | 2025-05-31 16:25:43.766669 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-31 16:25:43.766676 | orchestrator | Saturday 31 May 2025 16:19:23 +0000 (0:00:01.742) 0:05:49.306 ********** 2025-05-31 16:25:43.766684 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-31 16:25:43.766692 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-31 16:25:43.766700 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-31 16:25:43.766707 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:25:43.766715 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-31 16:25:43.766723 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-31 16:25:43.766731 | orchestrator | 2025-05-31 16:25:43.766739 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-31 16:25:43.766747 | orchestrator | Saturday 31 May 2025 16:19:24 +0000 (0:00:01.176) 0:05:50.483 ********** 2025-05-31 16:25:43.766754 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.766762 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.766770 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.766777 | orchestrator | 2025-05-31 16:25:43.766785 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-31 16:25:43.766793 | orchestrator | Saturday 31 May 2025 16:19:25 +0000 (0:00:00.695) 0:05:51.178 ********** 2025-05-31 16:25:43.766801 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766809 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766816 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766824 | orchestrator | 2025-05-31 16:25:43.766832 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-31 16:25:43.766840 | orchestrator | Saturday 31 May 2025 16:19:26 +0000 (0:00:00.558) 0:05:51.737 ********** 2025-05-31 16:25:43.766848 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.766855 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.766863 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.766871 | orchestrator | 2025-05-31 16:25:43.766926 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-31 16:25:43.766935 | orchestrator | Saturday 31 May 2025 16:19:26 +0000 (0:00:00.359) 0:05:52.096 ********** 2025-05-31 16:25:43.766943 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.766951 | orchestrator | 2025-05-31 16:25:43.766976 | orchestrator | TASK 
[ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-31 16:25:43.766985 | orchestrator | Saturday 31 May 2025 16:19:27 +0000 (0:00:00.546) 0:05:52.642 ********** 2025-05-31 16:25:43.766993 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.767000 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.767008 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.767016 | orchestrator | 2025-05-31 16:25:43.767024 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-31 16:25:43.767037 | orchestrator | Saturday 31 May 2025 16:19:27 +0000 (0:00:00.569) 0:05:53.211 ********** 2025-05-31 16:25:43.767045 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.767053 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.767060 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.767068 | orchestrator | 2025-05-31 16:25:43.767076 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-31 16:25:43.767084 | orchestrator | Saturday 31 May 2025 16:19:28 +0000 (0:00:00.348) 0:05:53.560 ********** 2025-05-31 16:25:43.767096 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.767104 | orchestrator | 2025-05-31 16:25:43.767112 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-31 16:25:43.767120 | orchestrator | Saturday 31 May 2025 16:19:28 +0000 (0:00:00.525) 0:05:54.085 ********** 2025-05-31 16:25:43.767127 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767135 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767143 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767150 | orchestrator | 2025-05-31 16:25:43.767158 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-31 16:25:43.767166 | orchestrator | Saturday 31 May 2025 16:19:30 +0000 (0:00:01.524) 0:05:55.609 ********** 2025-05-31 16:25:43.767173 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767181 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767189 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767196 | orchestrator | 2025-05-31 16:25:43.767204 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-31 16:25:43.767212 | orchestrator | Saturday 31 May 2025 16:19:31 +0000 (0:00:01.179) 0:05:56.789 ********** 2025-05-31 16:25:43.767219 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767227 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767234 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767242 | orchestrator | 2025-05-31 16:25:43.767250 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-31 16:25:43.767258 | orchestrator | Saturday 31 May 2025 16:19:33 +0000 (0:00:01.798) 0:05:58.588 ********** 2025-05-31 16:25:43.767266 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767274 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767306 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767316 | orchestrator | 2025-05-31 16:25:43.767323 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-31 16:25:43.767331 | orchestrator | Saturday 31 May 
2025 16:19:35 +0000 (0:00:02.221) 0:06:00.809 ********** 2025-05-31 16:25:43.767339 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.767347 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.767354 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-31 16:25:43.767362 | orchestrator | 2025-05-31 16:25:43.767370 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-31 16:25:43.767378 | orchestrator | Saturday 31 May 2025 16:19:35 +0000 (0:00:00.581) 0:06:01.391 ********** 2025-05-31 16:25:43.767386 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-31 16:25:43.767394 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-31 16:25:43.767402 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.767409 | orchestrator | 2025-05-31 16:25:43.767417 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-31 16:25:43.767425 | orchestrator | Saturday 31 May 2025 16:19:49 +0000 (0:00:13.326) 0:06:14.717 ********** 2025-05-31 16:25:43.767433 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.767440 | orchestrator | 2025-05-31 16:25:43.767448 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-31 16:25:43.767464 | orchestrator | Saturday 31 May 2025 16:19:50 +0000 (0:00:01.725) 0:06:16.443 ********** 2025-05-31 16:25:43.767472 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.767480 | orchestrator | 2025-05-31 16:25:43.767488 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-31 16:25:43.767495 | orchestrator | Saturday 31 May 2025 16:19:51 +0000 (0:00:00.433) 0:06:16.876 ********** 2025-05-31 16:25:43.767503 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.767511 | orchestrator | 2025-05-31 16:25:43.767518 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-31 16:25:43.767526 | orchestrator | Saturday 31 May 2025 16:19:51 +0000 (0:00:00.269) 0:06:17.146 ********** 2025-05-31 16:25:43.767534 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-31 16:25:43.767542 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-31 16:25:43.767549 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-31 16:25:43.767557 | orchestrator | 2025-05-31 16:25:43.767565 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-31 16:25:43.767572 | orchestrator | Saturday 31 May 2025 16:19:57 +0000 (0:00:06.337) 0:06:23.484 ********** 2025-05-31 16:25:43.767580 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-31 16:25:43.767588 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-31 16:25:43.767596 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-31 16:25:43.767603 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-31 16:25:43.767611 | orchestrator | 2025-05-31 16:25:43.767619 | orchestrator | RUNNING HANDLER 
[ceph-handler : make tempdir for scripts] ********************** 2025-05-31 16:25:43.767626 | orchestrator | Saturday 31 May 2025 16:20:02 +0000 (0:00:04.887) 0:06:28.371 ********** 2025-05-31 16:25:43.767634 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767642 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767649 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767657 | orchestrator | 2025-05-31 16:25:43.767665 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-31 16:25:43.767672 | orchestrator | Saturday 31 May 2025 16:20:03 +0000 (0:00:00.678) 0:06:29.050 ********** 2025-05-31 16:25:43.767680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:43.767688 | orchestrator | 2025-05-31 16:25:43.767696 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-31 16:25:43.767707 | orchestrator | Saturday 31 May 2025 16:20:04 +0000 (0:00:00.816) 0:06:29.866 ********** 2025-05-31 16:25:43.767715 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.767723 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.767731 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.767738 | orchestrator | 2025-05-31 16:25:43.767746 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-31 16:25:43.767754 | orchestrator | Saturday 31 May 2025 16:20:04 +0000 (0:00:00.335) 0:06:30.202 ********** 2025-05-31 16:25:43.767762 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767770 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767777 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767785 | orchestrator | 2025-05-31 16:25:43.767793 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-31 16:25:43.767801 | orchestrator | Saturday 31 May 2025 16:20:05 +0000 (0:00:01.214) 0:06:31.417 ********** 2025-05-31 16:25:43.767808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:25:43.767816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:25:43.767824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:25:43.767832 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.767844 | orchestrator | 2025-05-31 16:25:43.767852 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-31 16:25:43.767860 | orchestrator | Saturday 31 May 2025 16:20:06 +0000 (0:00:00.907) 0:06:32.324 ********** 2025-05-31 16:25:43.767868 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.767875 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.767899 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.767907 | orchestrator | 2025-05-31 16:25:43.767938 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-31 16:25:43.767947 | orchestrator | Saturday 31 May 2025 16:20:07 +0000 (0:00:00.352) 0:06:32.677 ********** 2025-05-31 16:25:43.767955 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.767963 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.767970 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.767978 | orchestrator | 2025-05-31 16:25:43.767986 | orchestrator | PLAY [Apply role ceph-osd] 
***************************************************** 2025-05-31 16:25:43.767994 | orchestrator | 2025-05-31 16:25:43.768002 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.768009 | orchestrator | Saturday 31 May 2025 16:20:09 +0000 (0:00:02.030) 0:06:34.708 ********** 2025-05-31 16:25:43.768017 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.768025 | orchestrator | 2025-05-31 16:25:43.768033 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.768040 | orchestrator | Saturday 31 May 2025 16:20:09 +0000 (0:00:00.695) 0:06:35.403 ********** 2025-05-31 16:25:43.768048 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768056 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768063 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768071 | orchestrator | 2025-05-31 16:25:43.768079 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.768087 | orchestrator | Saturday 31 May 2025 16:20:10 +0000 (0:00:00.366) 0:06:35.770 ********** 2025-05-31 16:25:43.768095 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768102 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768110 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768118 | orchestrator | 2025-05-31 16:25:43.768126 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.768133 | orchestrator | Saturday 31 May 2025 16:20:10 +0000 (0:00:00.663) 0:06:36.434 ********** 2025-05-31 16:25:43.768141 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768149 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768157 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768164 | orchestrator | 2025-05-31 16:25:43.768172 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.768180 | orchestrator | Saturday 31 May 2025 16:20:12 +0000 (0:00:01.063) 0:06:37.497 ********** 2025-05-31 16:25:43.768188 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768195 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768203 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768211 | orchestrator | 2025-05-31 16:25:43.768219 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.768226 | orchestrator | Saturday 31 May 2025 16:20:12 +0000 (0:00:00.723) 0:06:38.221 ********** 2025-05-31 16:25:43.768234 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768242 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768250 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768258 | orchestrator | 2025-05-31 16:25:43.768265 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.768273 | orchestrator | Saturday 31 May 2025 16:20:13 +0000 (0:00:00.350) 0:06:38.571 ********** 2025-05-31 16:25:43.768281 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768289 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768297 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768310 | orchestrator | 2025-05-31 16:25:43.768318 | orchestrator | TASK [ceph-handler 
: check for a nfs container] ******************************** 2025-05-31 16:25:43.768326 | orchestrator | Saturday 31 May 2025 16:20:13 +0000 (0:00:00.404) 0:06:38.975 ********** 2025-05-31 16:25:43.768333 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768341 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768349 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768356 | orchestrator | 2025-05-31 16:25:43.768364 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-31 16:25:43.768372 | orchestrator | Saturday 31 May 2025 16:20:14 +0000 (0:00:00.610) 0:06:39.586 ********** 2025-05-31 16:25:43.768379 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768387 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768395 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768402 | orchestrator | 2025-05-31 16:25:43.768410 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.768418 | orchestrator | Saturday 31 May 2025 16:20:14 +0000 (0:00:00.320) 0:06:39.906 ********** 2025-05-31 16:25:43.768430 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768438 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768446 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768453 | orchestrator | 2025-05-31 16:25:43.768461 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.768469 | orchestrator | Saturday 31 May 2025 16:20:14 +0000 (0:00:00.313) 0:06:40.219 ********** 2025-05-31 16:25:43.768476 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768484 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768492 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768499 | orchestrator | 2025-05-31 16:25:43.768507 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.768514 | orchestrator | Saturday 31 May 2025 16:20:15 +0000 (0:00:00.297) 0:06:40.516 ********** 2025-05-31 16:25:43.768522 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768530 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768537 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768545 | orchestrator | 2025-05-31 16:25:43.768553 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.768560 | orchestrator | Saturday 31 May 2025 16:20:15 +0000 (0:00:00.904) 0:06:41.421 ********** 2025-05-31 16:25:43.768568 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768576 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768583 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768591 | orchestrator | 2025-05-31 16:25:43.768599 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-31 16:25:43.768606 | orchestrator | Saturday 31 May 2025 16:20:16 +0000 (0:00:00.276) 0:06:41.697 ********** 2025-05-31 16:25:43.768614 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768644 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768653 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768661 | orchestrator | 2025-05-31 16:25:43.768669 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 
16:25:43.768677 | orchestrator | Saturday 31 May 2025 16:20:16 +0000 (0:00:00.251) 0:06:41.949 ********** 2025-05-31 16:25:43.768685 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768693 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768700 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768708 | orchestrator | 2025-05-31 16:25:43.768716 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-31 16:25:43.768724 | orchestrator | Saturday 31 May 2025 16:20:16 +0000 (0:00:00.432) 0:06:42.381 ********** 2025-05-31 16:25:43.768732 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768740 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768748 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768755 | orchestrator | 2025-05-31 16:25:43.768763 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.768800 | orchestrator | Saturday 31 May 2025 16:20:17 +0000 (0:00:00.298) 0:06:42.679 ********** 2025-05-31 16:25:43.768808 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.768816 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.768823 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.768831 | orchestrator | 2025-05-31 16:25:43.768839 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.768847 | orchestrator | Saturday 31 May 2025 16:20:17 +0000 (0:00:00.279) 0:06:42.959 ********** 2025-05-31 16:25:43.768854 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768862 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768870 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768900 | orchestrator | 2025-05-31 16:25:43.768909 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 16:25:43.768917 | orchestrator | Saturday 31 May 2025 16:20:17 +0000 (0:00:00.299) 0:06:43.259 ********** 2025-05-31 16:25:43.768924 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768932 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768940 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768948 | orchestrator | 2025-05-31 16:25:43.768956 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.768963 | orchestrator | Saturday 31 May 2025 16:20:18 +0000 (0:00:00.281) 0:06:43.540 ********** 2025-05-31 16:25:43.768971 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.768979 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.768986 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.768994 | orchestrator | 2025-05-31 16:25:43.769002 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 16:25:43.769010 | orchestrator | Saturday 31 May 2025 16:20:18 +0000 (0:00:00.449) 0:06:43.990 ********** 2025-05-31 16:25:43.769017 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.769025 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.769033 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.769040 | orchestrator | 2025-05-31 16:25:43.769048 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.769056 | orchestrator | Saturday 31 May 2025 16:20:18 +0000 (0:00:00.325) 0:06:44.316 ********** 2025-05-31 16:25:43.769064 
| orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769072 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769079 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769087 | orchestrator | 2025-05-31 16:25:43.769095 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.769102 | orchestrator | Saturday 31 May 2025 16:20:19 +0000 (0:00:00.274) 0:06:44.590 ********** 2025-05-31 16:25:43.769110 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769118 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769126 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769133 | orchestrator | 2025-05-31 16:25:43.769141 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.769149 | orchestrator | Saturday 31 May 2025 16:20:19 +0000 (0:00:00.264) 0:06:44.854 ********** 2025-05-31 16:25:43.769157 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769164 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769172 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769180 | orchestrator | 2025-05-31 16:25:43.769188 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.769195 | orchestrator | Saturday 31 May 2025 16:20:19 +0000 (0:00:00.436) 0:06:45.291 ********** 2025-05-31 16:25:43.769203 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769215 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769222 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769230 | orchestrator | 2025-05-31 16:25:43.769238 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.769250 | orchestrator | Saturday 31 May 2025 16:20:20 +0000 (0:00:00.322) 0:06:45.613 ********** 2025-05-31 16:25:43.769258 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769266 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769274 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769282 | orchestrator | 2025-05-31 16:25:43.769289 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.769297 | orchestrator | Saturday 31 May 2025 16:20:20 +0000 (0:00:00.339) 0:06:45.952 ********** 2025-05-31 16:25:43.769305 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769313 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769320 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769328 | orchestrator | 2025-05-31 16:25:43.769336 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.769343 | orchestrator | Saturday 31 May 2025 16:20:20 +0000 (0:00:00.288) 0:06:46.240 ********** 2025-05-31 16:25:43.769351 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769359 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769366 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769374 | orchestrator | 2025-05-31 16:25:43.769382 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.769390 | orchestrator | Saturday 31 May 2025 16:20:21 +0000 (0:00:00.616) 0:06:46.857 ********** 2025-05-31 16:25:43.769398 | orchestrator | skipping: 
[testbed-node-3] 2025-05-31 16:25:43.769431 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769440 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769448 | orchestrator | 2025-05-31 16:25:43.769456 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.769464 | orchestrator | Saturday 31 May 2025 16:20:21 +0000 (0:00:00.302) 0:06:47.160 ********** 2025-05-31 16:25:43.769472 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769480 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769487 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769495 | orchestrator | 2025-05-31 16:25:43.769503 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.769511 | orchestrator | Saturday 31 May 2025 16:20:21 +0000 (0:00:00.311) 0:06:47.471 ********** 2025-05-31 16:25:43.769519 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769527 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769535 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769542 | orchestrator | 2025-05-31 16:25:43.769550 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.769558 | orchestrator | Saturday 31 May 2025 16:20:22 +0000 (0:00:00.318) 0:06:47.789 ********** 2025-05-31 16:25:43.769566 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769573 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769581 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769589 | orchestrator | 2025-05-31 16:25:43.769597 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.769605 | orchestrator | Saturday 31 May 2025 16:20:22 +0000 (0:00:00.625) 0:06:48.414 ********** 2025-05-31 16:25:43.769612 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769620 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769628 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769636 | orchestrator | 2025-05-31 16:25:43.769644 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.769651 | orchestrator | Saturday 31 May 2025 16:20:23 +0000 (0:00:00.333) 0:06:48.748 ********** 2025-05-31 16:25:43.769659 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.769667 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.769675 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.769683 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769695 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.769703 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769711 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.769719 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.769726 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769734 | orchestrator | 2025-05-31 16:25:43.769742 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.769750 | orchestrator | Saturday 31 May 2025 16:20:23 +0000 (0:00:00.369) 0:06:49.117 ********** 2025-05-31 16:25:43.769758 | 
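The skipped ceph-config tasks above are the role's OSD-counting path: when raw devices are handed to ceph-ansible it asks ceph-volume how many OSDs a batch run would create, adds the OSDs that already exist, and stores the total in num_osds (which later feeds the osd_memory_target calculation). A standalone sketch of that idea, assuming plain ceph-volume on the host and a `devices` list variable, not the actual role code; the report JSON shape differs between Ceph releases, which is why the log has both a "legacy report" and a "new report" variant:

  - name: report how many OSDs a batch run would create (sketch)
    ansible.builtin.command: >
      ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
    register: lvm_batch_report
    changed_when: false

  - name: list OSDs that already exist on this host (sketch)
    ansible.builtin.command: ceph-volume lvm list --format json
    register: lvm_list
    changed_when: false

  - name: set_fact num_osds (planned plus existing, new-style report)
    ansible.builtin.set_fact:
      num_osds: "{{ (lvm_batch_report.stdout | from_json | length) + (lvm_list.stdout | from_json | length) }}"

Here the whole chain is skipped: the testbed drives OSD creation through pre-built logical volumes (the data/data_vg pairs visible further down) rather than a raw devices list.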
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-31 16:25:43.769765 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-31 16:25:43.769773 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-31 16:25:43.769781 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769789 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-31 16:25:43.769796 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769804 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-31 16:25:43.769812 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-31 16:25:43.769820 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769827 | orchestrator | 2025-05-31 16:25:43.769835 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.769843 | orchestrator | Saturday 31 May 2025 16:20:23 +0000 (0:00:00.356) 0:06:49.474 ********** 2025-05-31 16:25:43.769851 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769858 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769866 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769874 | orchestrator | 2025-05-31 16:25:43.769919 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.769927 | orchestrator | Saturday 31 May 2025 16:20:24 +0000 (0:00:00.593) 0:06:50.068 ********** 2025-05-31 16:25:43.769939 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769947 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.769955 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.769963 | orchestrator | 2025-05-31 16:25:43.769971 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.769979 | orchestrator | Saturday 31 May 2025 16:20:24 +0000 (0:00:00.329) 0:06:50.398 ********** 2025-05-31 16:25:43.769986 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.769994 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770001 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770009 | orchestrator | 2025-05-31 16:25:43.770037 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.770047 | orchestrator | Saturday 31 May 2025 16:20:25 +0000 (0:00:00.318) 0:06:50.716 ********** 2025-05-31 16:25:43.770055 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770062 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770070 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770078 | orchestrator | 2025-05-31 16:25:43.770085 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:25:43.770093 | orchestrator | Saturday 31 May 2025 16:20:25 +0000 (0:00:00.321) 0:06:51.037 ********** 2025-05-31 16:25:43.770101 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770109 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770116 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770124 | orchestrator | 2025-05-31 16:25:43.770132 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.770166 | orchestrator | 
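The ceph-facts block here works out which address each radosgw instance should bind to, taking it from radosgw_address_block (a CIDR), an explicit radosgw_address, or the first matching address on radosgw_interface, with separate IPv4 and IPv6 branches. A rough illustration of the CIDR branch only, assuming the ansible.utils collection is installed; the real role also iterates over the rgw hosts and handles the interface case shown in the next few tasks:

  - name: set_fact _radosgw_address from radosgw_address_block, ipv4 (sketch)
    ansible.builtin.set_fact:
      _radosgw_address: >-
        {{ ansible_facts['all_ipv4_addresses']
           | ansible.utils.ipaddr(radosgw_address_block)
           | first }}

On these hosts the branch is skipped; the resulting rgw_instances entries (rgw0 on 192.168.16.13-15, port 8081) appear a few tasks further down.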
Saturday 31 May 2025 16:20:26 +0000 (0:00:00.592) 0:06:51.630 ********** 2025-05-31 16:25:43.770175 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770183 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770197 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770205 | orchestrator | 2025-05-31 16:25:43.770213 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.770220 | orchestrator | Saturday 31 May 2025 16:20:26 +0000 (0:00:00.370) 0:06:52.001 ********** 2025-05-31 16:25:43.770228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.770236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.770244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.770251 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770259 | orchestrator | 2025-05-31 16:25:43.770267 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.770274 | orchestrator | Saturday 31 May 2025 16:20:26 +0000 (0:00:00.420) 0:06:52.422 ********** 2025-05-31 16:25:43.770282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.770290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.770298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.770306 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770313 | orchestrator | 2025-05-31 16:25:43.770321 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.770329 | orchestrator | Saturday 31 May 2025 16:20:27 +0000 (0:00:00.420) 0:06:52.842 ********** 2025-05-31 16:25:43.770337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.770345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.770353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.770360 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770368 | orchestrator | 2025-05-31 16:25:43.770376 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.770383 | orchestrator | Saturday 31 May 2025 16:20:27 +0000 (0:00:00.392) 0:06:53.234 ********** 2025-05-31 16:25:43.770391 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770399 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770407 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770414 | orchestrator | 2025-05-31 16:25:43.770422 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.770430 | orchestrator | Saturday 31 May 2025 16:20:28 +0000 (0:00:00.610) 0:06:53.844 ********** 2025-05-31 16:25:43.770437 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.770445 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770453 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.770461 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770468 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.770476 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770484 | orchestrator | 2025-05-31 16:25:43.770491 | 
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.770499 | orchestrator | Saturday 31 May 2025 16:20:28 +0000 (0:00:00.489) 0:06:54.334 ********** 2025-05-31 16:25:43.770507 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770515 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770522 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770530 | orchestrator | 2025-05-31 16:25:43.770538 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.770545 | orchestrator | Saturday 31 May 2025 16:20:29 +0000 (0:00:00.375) 0:06:54.710 ********** 2025-05-31 16:25:43.770553 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770561 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770568 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770576 | orchestrator | 2025-05-31 16:25:43.770584 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.770592 | orchestrator | Saturday 31 May 2025 16:20:29 +0000 (0:00:00.331) 0:06:55.041 ********** 2025-05-31 16:25:43.770603 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.770611 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770619 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.770626 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770634 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.770646 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770653 | orchestrator | 2025-05-31 16:25:43.770661 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.770669 | orchestrator | Saturday 31 May 2025 16:20:30 +0000 (0:00:00.966) 0:06:56.008 ********** 2025-05-31 16:25:43.770677 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.770685 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770693 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.770700 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770708 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.770716 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770724 | orchestrator | 2025-05-31 16:25:43.770732 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.770740 | orchestrator | Saturday 31 May 2025 16:20:30 +0000 (0:00:00.423) 0:06:56.431 ********** 2025-05-31 16:25:43.770748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.770755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.770763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.770793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.770803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.770811 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-5)  2025-05-31 16:25:43.770819 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770827 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:25:43.770843 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.770850 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.770858 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770866 | orchestrator | 2025-05-31 16:25:43.770874 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.770899 | orchestrator | Saturday 31 May 2025 16:20:31 +0000 (0:00:00.735) 0:06:57.166 ********** 2025-05-31 16:25:43.770908 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770916 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770923 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.770931 | orchestrator | 2025-05-31 16:25:43.770939 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.770947 | orchestrator | Saturday 31 May 2025 16:20:32 +0000 (0:00:00.763) 0:06:57.930 ********** 2025-05-31 16:25:43.770954 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.770962 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.770970 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.770978 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.770985 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.770993 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771001 | orchestrator | 2025-05-31 16:25:43.771009 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.771016 | orchestrator | Saturday 31 May 2025 16:20:32 +0000 (0:00:00.533) 0:06:58.463 ********** 2025-05-31 16:25:43.771030 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771038 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771045 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771053 | orchestrator | 2025-05-31 16:25:43.771061 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.771069 | orchestrator | Saturday 31 May 2025 16:20:33 +0000 (0:00:00.760) 0:06:59.224 ********** 2025-05-31 16:25:43.771077 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771084 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771092 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771100 | orchestrator | 2025-05-31 16:25:43.771108 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-31 16:25:43.771115 | orchestrator | Saturday 31 May 2025 16:20:34 +0000 (0:00:00.451) 0:06:59.675 ********** 2025-05-31 16:25:43.771123 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.771131 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.771139 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.771146 | orchestrator | 2025-05-31 16:25:43.771154 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-31 16:25:43.771162 | orchestrator | Saturday 31 May 2025 16:20:34 +0000 (0:00:00.369) 0:07:00.044 ********** 
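The skipped rgw_instances items just above show the shape ceph-facts gives each radosgw endpoint on these nodes. Written out as plain YAML, with the values copied from the log:

  rgw_instances:
    - instance_name: rgw0
      radosgw_address: 192.168.16.13   # .14 and .15 on the other two nodes
      radosgw_frontend_port: 8081

The fact is only consumed later by the rgw role and its templates; in this play it is carried along while the run moves on to the ceph-osd tasks below.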
2025-05-31 16:25:43.771171 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 16:25:43.771179 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:25:43.771186 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:25:43.771194 | orchestrator | 2025-05-31 16:25:43.771202 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-31 16:25:43.771210 | orchestrator | Saturday 31 May 2025 16:20:35 +0000 (0:00:00.581) 0:07:00.626 ********** 2025-05-31 16:25:43.771218 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.771226 | orchestrator | 2025-05-31 16:25:43.771234 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-31 16:25:43.771241 | orchestrator | Saturday 31 May 2025 16:20:35 +0000 (0:00:00.493) 0:07:01.119 ********** 2025-05-31 16:25:43.771249 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771257 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771269 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771277 | orchestrator | 2025-05-31 16:25:43.771284 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-31 16:25:43.771292 | orchestrator | Saturday 31 May 2025 16:20:35 +0000 (0:00:00.253) 0:07:01.373 ********** 2025-05-31 16:25:43.771300 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771308 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771316 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771324 | orchestrator | 2025-05-31 16:25:43.771332 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-31 16:25:43.771339 | orchestrator | Saturday 31 May 2025 16:20:36 +0000 (0:00:00.431) 0:07:01.804 ********** 2025-05-31 16:25:43.771347 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771355 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771362 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771370 | orchestrator | 2025-05-31 16:25:43.771377 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-31 16:25:43.771385 | orchestrator | Saturday 31 May 2025 16:20:36 +0000 (0:00:00.278) 0:07:02.082 ********** 2025-05-31 16:25:43.771393 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771401 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771408 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771416 | orchestrator | 2025-05-31 16:25:43.771424 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-31 16:25:43.771454 | orchestrator | Saturday 31 May 2025 16:20:36 +0000 (0:00:00.260) 0:07:02.342 ********** 2025-05-31 16:25:43.771462 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.771470 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.771477 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.771485 | orchestrator | 2025-05-31 16:25:43.771516 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-31 16:25:43.771525 | orchestrator | Saturday 31 May 2025 16:20:37 +0000 (0:00:00.596) 
0:07:02.939 ********** 2025-05-31 16:25:43.771533 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.771541 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.771549 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.771556 | orchestrator | 2025-05-31 16:25:43.771564 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-31 16:25:43.771572 | orchestrator | Saturday 31 May 2025 16:20:37 +0000 (0:00:00.434) 0:07:03.373 ********** 2025-05-31 16:25:43.771580 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-31 16:25:43.771588 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-31 16:25:43.771596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-31 16:25:43.771604 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-31 16:25:43.771612 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-31 16:25:43.771620 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-31 16:25:43.771627 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-31 16:25:43.771635 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-31 16:25:43.771643 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-31 16:25:43.771651 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-31 16:25:43.771658 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-31 16:25:43.771666 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-31 16:25:43.771674 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-31 16:25:43.771682 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-31 16:25:43.771689 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-31 16:25:43.771697 | orchestrator | 2025-05-31 16:25:43.771705 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-31 16:25:43.771713 | orchestrator | Saturday 31 May 2025 16:20:41 +0000 (0:00:03.259) 0:07:06.633 ********** 2025-05-31 16:25:43.771720 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.771728 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.771736 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.771744 | orchestrator | 2025-05-31 16:25:43.771752 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-31 16:25:43.771759 | orchestrator | Saturday 31 May 2025 16:20:41 +0000 (0:00:00.368) 0:07:07.001 ********** 2025-05-31 16:25:43.771767 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.771775 | orchestrator | 2025-05-31 16:25:43.771783 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-31 16:25:43.771791 | 
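The 'apply operating system tuning' task above pushed a small set of kernel parameters onto every OSD node: fs.aio-max-nr, fs.file-max, vm.zone_reclaim_mode, vm.swappiness and the vm.min_free_kbytes value derived from each node's default just before. A minimal equivalent using the stock sysctl module and the values from this run; the sysctl_file path is an assumption for the sketch, and the real role loops over an os_tuning_params list instead of hard-coding the items:

  - name: apply operating system tuning (sketch)
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      state: present
      sysctl_file: /etc/sysctl.d/99-ceph-osd-tuning.conf
      reload: true
    loop:
      - { name: fs.aio-max-nr, value: "1048576" }
      - { name: fs.file-max, value: "26234859" }
      - { name: vm.zone_reclaim_mode, value: "0" }
      - { name: vm.swappiness, value: "10" }
      - { name: vm.min_free_kbytes, value: "67584" }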
orchestrator | Saturday 31 May 2025 16:20:42 +0000 (0:00:00.607) 0:07:07.608 ********** 2025-05-31 16:25:43.771799 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-31 16:25:43.771806 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-31 16:25:43.771820 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-31 16:25:43.771828 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-31 16:25:43.771836 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-31 16:25:43.771843 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-31 16:25:43.771851 | orchestrator | 2025-05-31 16:25:43.771863 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-31 16:25:43.771871 | orchestrator | Saturday 31 May 2025 16:20:43 +0000 (0:00:00.966) 0:07:08.575 ********** 2025-05-31 16:25:43.771916 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:25:43.771925 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.771933 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 16:25:43.771941 | orchestrator | 2025-05-31 16:25:43.771949 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-31 16:25:43.771957 | orchestrator | Saturday 31 May 2025 16:20:44 +0000 (0:00:01.814) 0:07:10.390 ********** 2025-05-31 16:25:43.771964 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 16:25:43.771972 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.771980 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.771988 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 16:25:43.771996 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.772004 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.772015 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 16:25:43.772027 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.772041 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.772055 | orchestrator | 2025-05-31 16:25:43.772076 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-31 16:25:43.772092 | orchestrator | Saturday 31 May 2025 16:20:46 +0000 (0:00:01.460) 0:07:11.850 ********** 2025-05-31 16:25:43.772139 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.772153 | orchestrator | 2025-05-31 16:25:43.772165 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-31 16:25:43.772177 | orchestrator | Saturday 31 May 2025 16:20:48 +0000 (0:00:02.358) 0:07:14.209 ********** 2025-05-31 16:25:43.772189 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.772200 | orchestrator | 2025-05-31 16:25:43.772213 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-31 16:25:43.772225 | orchestrator | Saturday 31 May 2025 16:20:49 +0000 (0:00:00.547) 0:07:14.756 ********** 2025-05-31 16:25:43.772236 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.772247 | orchestrator | 
skipping: [testbed-node-4] 2025-05-31 16:25:43.772258 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.772269 | orchestrator | 2025-05-31 16:25:43.772281 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-31 16:25:43.772293 | orchestrator | Saturday 31 May 2025 16:20:49 +0000 (0:00:00.489) 0:07:15.245 ********** 2025-05-31 16:25:43.772305 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.772317 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.772329 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.772343 | orchestrator | 2025-05-31 16:25:43.772356 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-31 16:25:43.772364 | orchestrator | Saturday 31 May 2025 16:20:50 +0000 (0:00:00.312) 0:07:15.558 ********** 2025-05-31 16:25:43.772372 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.772379 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.772387 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.772403 | orchestrator | 2025-05-31 16:25:43.772411 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-31 16:25:43.772419 | orchestrator | Saturday 31 May 2025 16:20:50 +0000 (0:00:00.308) 0:07:15.866 ********** 2025-05-31 16:25:43.772427 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.772434 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.772441 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.772448 | orchestrator | 2025-05-31 16:25:43.772454 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-31 16:25:43.772461 | orchestrator | Saturday 31 May 2025 16:20:50 +0000 (0:00:00.315) 0:07:16.182 ********** 2025-05-31 16:25:43.772467 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.772474 | orchestrator | 2025-05-31 16:25:43.772480 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-31 16:25:43.772487 | orchestrator | Saturday 31 May 2025 16:20:51 +0000 (0:00:00.850) 0:07:17.032 ********** 2025-05-31 16:25:43.772493 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ad7aff40-0fc1-546d-9ec3-a4c69926416d', 'data_vg': 'ceph-ad7aff40-0fc1-546d-9ec3-a4c69926416d'}) 2025-05-31 16:25:43.772501 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6a818804-e2a7-5d8b-beae-a4acf44277a5', 'data_vg': 'ceph-6a818804-e2a7-5d8b-beae-a4acf44277a5'}) 2025-05-31 16:25:43.772507 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e43a14fa-64bd-59a3-8350-23173f11027f', 'data_vg': 'ceph-e43a14fa-64bd-59a3-8350-23173f11027f'}) 2025-05-31 16:25:43.772514 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8b45f5b5-5599-560e-b955-f5f9e148b85f', 'data_vg': 'ceph-8b45f5b5-5599-560e-b955-f5f9e148b85f'}) 2025-05-31 16:25:43.772521 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-02409adc-b936-5a4c-b212-7809fa63c72a', 'data_vg': 'ceph-02409adc-b936-5a4c-b212-7809fa63c72a'}) 2025-05-31 16:25:43.772527 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-92adfeec-5c5c-5208-b88e-9a01a071247e', 'data_vg': 'ceph-92adfeec-5c5c-5208-b88e-9a01a071247e'}) 
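The 39-second 'use ceph-volume to create bluestore osds' task above is where the OSDs are actually built: one ceph-volume call per pre-created logical volume, bluestore, and with dmcrypt enabled per the container_env_args fact selected just before. Stripped of the containerized wrapper the role uses, the underlying call looks roughly like this, looping over the same data/data_vg pairs shown in the log:

  - name: use ceph-volume to create bluestore osds (sketch, host-level)
    ansible.builtin.command: >
      ceph-volume lvm create --bluestore --dmcrypt
      --data {{ item.data_vg }}/{{ item.data }}
    loop: "{{ lvm_volumes }}"

In the actual run the command executes inside the Ceph container image with the osd_bluestore/osd_filestore/osd_dmcrypt environment flags passed in, which is why the env-args facts were computed first.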
2025-05-31 16:25:43.772533 | orchestrator | 2025-05-31 16:25:43.772545 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-31 16:25:43.772552 | orchestrator | Saturday 31 May 2025 16:21:30 +0000 (0:00:38.940) 0:07:55.972 ********** 2025-05-31 16:25:43.772558 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.772565 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.772571 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.772577 | orchestrator | 2025-05-31 16:25:43.772584 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-05-31 16:25:43.772590 | orchestrator | Saturday 31 May 2025 16:21:30 +0000 (0:00:00.430) 0:07:56.402 ********** 2025-05-31 16:25:43.772597 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.772603 | orchestrator | 2025-05-31 16:25:43.772609 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-31 16:25:43.772616 | orchestrator | Saturday 31 May 2025 16:21:31 +0000 (0:00:00.543) 0:07:56.946 ********** 2025-05-31 16:25:43.772622 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.772629 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.772635 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.772641 | orchestrator | 2025-05-31 16:25:43.772648 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-31 16:25:43.772654 | orchestrator | Saturday 31 May 2025 16:21:32 +0000 (0:00:00.674) 0:07:57.621 ********** 2025-05-31 16:25:43.772660 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.772667 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.772673 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.772680 | orchestrator | 2025-05-31 16:25:43.772713 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-31 16:25:43.772726 | orchestrator | Saturday 31 May 2025 16:21:34 +0000 (0:00:01.953) 0:07:59.574 ********** 2025-05-31 16:25:43.772733 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.772739 | orchestrator | 2025-05-31 16:25:43.772746 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-31 16:25:43.772752 | orchestrator | Saturday 31 May 2025 16:21:34 +0000 (0:00:00.601) 0:08:00.176 ********** 2025-05-31 16:25:43.772759 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.772765 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.772772 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.772778 | orchestrator | 2025-05-31 16:25:43.772785 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-31 16:25:43.772791 | orchestrator | Saturday 31 May 2025 16:21:36 +0000 (0:00:01.458) 0:08:01.634 ********** 2025-05-31 16:25:43.772798 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.772805 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.772811 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.772818 | orchestrator | 2025-05-31 16:25:43.772824 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-31 16:25:43.772831 | orchestrator | 
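The systemd tasks in this block template a ceph-osd@.service unit and a ceph-osd.target for the containerized OSDs, then enable the target (next task) and start one service instance per OSD id collected a few tasks earlier. The enable/start half, hand-rolled with the standard systemd module; osd_ids is a placeholder for whatever the 'collect osd ids' task registered:

  - name: enable ceph-osd.target (sketch)
    ansible.builtin.systemd:
      name: ceph-osd.target
      enabled: true
      daemon_reload: true

  - name: systemd start osd (sketch)
    ansible.builtin.systemd:
      name: "ceph-osd@{{ item }}.service"
      state: started
      enabled: true
    loop: "{{ osd_ids }}"   # ids 0-5 across the three nodes in this run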
Saturday 31 May 2025 16:21:37 +0000 (0:00:01.184) 0:08:02.819 ********** 2025-05-31 16:25:43.772837 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.772844 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.772850 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.772857 | orchestrator | 2025-05-31 16:25:43.772864 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-31 16:25:43.772870 | orchestrator | Saturday 31 May 2025 16:21:39 +0000 (0:00:01.689) 0:08:04.508 ********** 2025-05-31 16:25:43.772895 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.772907 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.772923 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.772940 | orchestrator | 2025-05-31 16:25:43.772949 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-31 16:25:43.772959 | orchestrator | Saturday 31 May 2025 16:21:39 +0000 (0:00:00.337) 0:08:04.845 ********** 2025-05-31 16:25:43.772969 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.772979 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.772990 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.772999 | orchestrator | 2025-05-31 16:25:43.773010 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-31 16:25:43.773019 | orchestrator | Saturday 31 May 2025 16:21:39 +0000 (0:00:00.633) 0:08:05.478 ********** 2025-05-31 16:25:43.773029 | orchestrator | ok: [testbed-node-3] => (item=2) 2025-05-31 16:25:43.773039 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-31 16:25:43.773048 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-31 16:25:43.773059 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-05-31 16:25:43.773069 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-05-31 16:25:43.773080 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-05-31 16:25:43.773091 | orchestrator | 2025-05-31 16:25:43.773102 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-31 16:25:43.773114 | orchestrator | Saturday 31 May 2025 16:21:41 +0000 (0:00:01.140) 0:08:06.619 ********** 2025-05-31 16:25:43.773121 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-05-31 16:25:43.773128 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-31 16:25:43.773134 | orchestrator | changed: [testbed-node-5] => (item=0) 2025-05-31 16:25:43.773141 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-05-31 16:25:43.773147 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-05-31 16:25:43.773154 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-05-31 16:25:43.773161 | orchestrator | 2025-05-31 16:25:43.773167 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-31 16:25:43.773174 | orchestrator | Saturday 31 May 2025 16:21:45 +0000 (0:00:03.947) 0:08:10.567 ********** 2025-05-31 16:25:43.773187 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773193 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.773207 | orchestrator | 2025-05-31 16:25:43.773213 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-31 16:25:43.773220 | orchestrator | 
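Around OSD activation the role brackets the work with the cluster-wide noup flag: it was set before ceph-volume ran so that freshly created OSDs are not marked up one at a time, it is unset here once per batch (delegated to the first monitor, which is why only testbed-node-5 shows 'changed'), and the play then polls until every OSD reports up. A reduced sketch of that pattern; the 60 retries match the log's retry message, while the 10-second delay is an assumption:

  - name: unset noup flag (sketch)
    ansible.builtin.command: ceph osd unset noup
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true

  - name: wait for all osd to be up (sketch)
    ansible.builtin.command: ceph osd stat --format json
    register: osd_stat
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true
    changed_when: false
    retries: 60
    delay: 10
    until: >-
      (osd_stat.stdout | from_json).num_osds > 0 and
      (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds

In this run one retry was enough: the wait took about 12.7 seconds before all six OSDs were up.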
Saturday 31 May 2025 16:21:47 +0000 (0:00:02.064) 0:08:12.631 ********** 2025-05-31 16:25:43.773226 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773233 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773240 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-05-31 16:25:43.773247 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.773253 | orchestrator | 2025-05-31 16:25:43.773260 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-31 16:25:43.773267 | orchestrator | Saturday 31 May 2025 16:21:59 +0000 (0:00:12.729) 0:08:25.361 ********** 2025-05-31 16:25:43.773273 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773280 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773286 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.773293 | orchestrator | 2025-05-31 16:25:43.773299 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-31 16:25:43.773306 | orchestrator | Saturday 31 May 2025 16:22:00 +0000 (0:00:00.420) 0:08:25.781 ********** 2025-05-31 16:25:43.773312 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773319 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773325 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.773332 | orchestrator | 2025-05-31 16:25:43.773338 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-31 16:25:43.773345 | orchestrator | Saturday 31 May 2025 16:22:01 +0000 (0:00:01.076) 0:08:26.858 ********** 2025-05-31 16:25:43.773351 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.773358 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.773364 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.773371 | orchestrator | 2025-05-31 16:25:43.773377 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-31 16:25:43.773416 | orchestrator | Saturday 31 May 2025 16:22:02 +0000 (0:00:00.853) 0:08:27.712 ********** 2025-05-31 16:25:43.773425 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.773431 | orchestrator | 2025-05-31 16:25:43.773438 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-31 16:25:43.773444 | orchestrator | Saturday 31 May 2025 16:22:02 +0000 (0:00:00.507) 0:08:28.219 ********** 2025-05-31 16:25:43.773451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.773457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.773464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.773470 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773477 | orchestrator | 2025-05-31 16:25:43.773484 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-31 16:25:43.773514 | orchestrator | Saturday 31 May 2025 16:22:03 +0000 (0:00:00.389) 0:08:28.608 ********** 2025-05-31 16:25:43.773522 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773528 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773535 | orchestrator | skipping: [testbed-node-5] 2025-05-31 
16:25:43.773541 | orchestrator | 2025-05-31 16:25:43.773548 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-31 16:25:43.773554 | orchestrator | Saturday 31 May 2025 16:22:03 +0000 (0:00:00.299) 0:08:28.908 ********** 2025-05-31 16:25:43.773561 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773568 | orchestrator | 2025-05-31 16:25:43.773580 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-31 16:25:43.773587 | orchestrator | Saturday 31 May 2025 16:22:04 +0000 (0:00:00.727) 0:08:29.635 ********** 2025-05-31 16:25:43.773594 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773600 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773607 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.773613 | orchestrator | 2025-05-31 16:25:43.773620 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-31 16:25:43.773626 | orchestrator | Saturday 31 May 2025 16:22:04 +0000 (0:00:00.409) 0:08:30.044 ********** 2025-05-31 16:25:43.773633 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773639 | orchestrator | 2025-05-31 16:25:43.773646 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-31 16:25:43.773652 | orchestrator | Saturday 31 May 2025 16:22:04 +0000 (0:00:00.228) 0:08:30.273 ********** 2025-05-31 16:25:43.773658 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773665 | orchestrator | 2025-05-31 16:25:43.773671 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-31 16:25:43.773678 | orchestrator | Saturday 31 May 2025 16:22:05 +0000 (0:00:00.300) 0:08:30.573 ********** 2025-05-31 16:25:43.773684 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773691 | orchestrator | 2025-05-31 16:25:43.773697 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-31 16:25:43.773704 | orchestrator | Saturday 31 May 2025 16:22:05 +0000 (0:00:00.174) 0:08:30.748 ********** 2025-05-31 16:25:43.773710 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773716 | orchestrator | 2025-05-31 16:25:43.773723 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-31 16:25:43.773729 | orchestrator | Saturday 31 May 2025 16:22:05 +0000 (0:00:00.224) 0:08:30.972 ********** 2025-05-31 16:25:43.773736 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773742 | orchestrator | 2025-05-31 16:25:43.773749 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-31 16:25:43.773755 | orchestrator | Saturday 31 May 2025 16:22:05 +0000 (0:00:00.234) 0:08:31.207 ********** 2025-05-31 16:25:43.773762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.773768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.773775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.773781 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773788 | orchestrator | 2025-05-31 16:25:43.773795 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-31 16:25:43.773801 | orchestrator | Saturday 31 May 2025 16:22:06 +0000 (0:00:00.420) 0:08:31.627 
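The osds handler running here is the restart-safety path: it only acts when an earlier change flagged an OSD restart, in which case it disables the balancer and pool pg autoscaling, restarts the OSD daemons host by host via the copied restart script, and then re-enables both (the mirror-image tasks follow just below). In this run nothing required a restart, so every step is skipped. The disable half in sketch form, with pools_with_autoscaling as a placeholder for the pool list gathered by 'get pool list':

  - name: disable balancer (sketch)
    ansible.builtin.command: ceph balancer off
    run_once: true

  - name: disable pg autoscale on pools (sketch)
    ansible.builtin.command: ceph osd pool set {{ item }} pg_autoscale_mode off
    loop: "{{ pools_with_autoscaling }}"
    run_once: true

Pausing the balancer and the autoscaler keeps PG remapping from piling on top of the rolling OSD restarts.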
********** 2025-05-31 16:25:43.773808 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773819 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.773826 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.773832 | orchestrator | 2025-05-31 16:25:43.773839 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-05-31 16:25:43.773845 | orchestrator | Saturday 31 May 2025 16:22:06 +0000 (0:00:00.574) 0:08:32.202 ********** 2025-05-31 16:25:43.773851 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773858 | orchestrator | 2025-05-31 16:25:43.773864 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-31 16:25:43.773871 | orchestrator | Saturday 31 May 2025 16:22:06 +0000 (0:00:00.248) 0:08:32.451 ********** 2025-05-31 16:25:43.773897 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.773905 | orchestrator | 2025-05-31 16:25:43.773912 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-31 16:25:43.773918 | orchestrator | Saturday 31 May 2025 16:22:07 +0000 (0:00:00.226) 0:08:32.677 ********** 2025-05-31 16:25:43.773925 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.773931 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.773938 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.773949 | orchestrator | 2025-05-31 16:25:43.773956 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-31 16:25:43.773962 | orchestrator | 2025-05-31 16:25:43.773969 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.773975 | orchestrator | Saturday 31 May 2025 16:22:09 +0000 (0:00:02.660) 0:08:35.338 ********** 2025-05-31 16:25:43.774009 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.774063 | orchestrator | 2025-05-31 16:25:43.774071 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.774078 | orchestrator | Saturday 31 May 2025 16:22:11 +0000 (0:00:01.176) 0:08:36.514 ********** 2025-05-31 16:25:43.774084 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774091 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774098 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.774104 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.774111 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774117 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.774124 | orchestrator | 2025-05-31 16:25:43.774130 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.774137 | orchestrator | Saturday 31 May 2025 16:22:11 +0000 (0:00:00.905) 0:08:37.420 ********** 2025-05-31 16:25:43.774143 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774150 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774156 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774163 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.774169 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.774175 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.774182 | orchestrator | 2025-05-31 
16:25:43.774188 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.774195 | orchestrator | Saturday 31 May 2025 16:22:12 +0000 (0:00:01.027) 0:08:38.448 ********** 2025-05-31 16:25:43.774202 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774208 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774215 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774221 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.774228 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.774234 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.774241 | orchestrator | 2025-05-31 16:25:43.774248 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.774254 | orchestrator | Saturday 31 May 2025 16:22:14 +0000 (0:00:01.175) 0:08:39.623 ********** 2025-05-31 16:25:43.774261 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774267 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774274 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774280 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.774286 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.774293 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.774299 | orchestrator | 2025-05-31 16:25:43.774306 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.774312 | orchestrator | Saturday 31 May 2025 16:22:15 +0000 (0:00:01.023) 0:08:40.647 ********** 2025-05-31 16:25:43.774319 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774325 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774332 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.774339 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.774345 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.774351 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774358 | orchestrator | 2025-05-31 16:25:43.774364 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.774371 | orchestrator | Saturday 31 May 2025 16:22:16 +0000 (0:00:00.890) 0:08:41.537 ********** 2025-05-31 16:25:43.774378 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774390 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774397 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774403 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774410 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774416 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774423 | orchestrator | 2025-05-31 16:25:43.774429 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-31 16:25:43.774436 | orchestrator | Saturday 31 May 2025 16:22:16 +0000 (0:00:00.635) 0:08:42.173 ********** 2025-05-31 16:25:43.774442 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774449 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774455 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774462 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774468 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774475 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774481 | orchestrator | 2025-05-31 16:25:43.774488 | orchestrator | TASK 
[ceph-handler : check for a tcmu-runner container] ************************ 2025-05-31 16:25:43.774494 | orchestrator | Saturday 31 May 2025 16:22:17 +0000 (0:00:00.871) 0:08:43.044 ********** 2025-05-31 16:25:43.774501 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774507 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774514 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774543 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774550 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774557 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774563 | orchestrator | 2025-05-31 16:25:43.774570 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.774576 | orchestrator | Saturday 31 May 2025 16:22:18 +0000 (0:00:00.616) 0:08:43.661 ********** 2025-05-31 16:25:43.774583 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774589 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774596 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774602 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774608 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774615 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774621 | orchestrator | 2025-05-31 16:25:43.774628 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.774635 | orchestrator | Saturday 31 May 2025 16:22:18 +0000 (0:00:00.818) 0:08:44.480 ********** 2025-05-31 16:25:43.774641 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774647 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774654 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774660 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774667 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774673 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774680 | orchestrator | 2025-05-31 16:25:43.774686 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.774693 | orchestrator | Saturday 31 May 2025 16:22:19 +0000 (0:00:00.623) 0:08:45.103 ********** 2025-05-31 16:25:43.774700 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.774706 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.774736 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.774744 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.774750 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.774757 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.774763 | orchestrator | 2025-05-31 16:25:43.774770 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.774777 | orchestrator | Saturday 31 May 2025 16:22:20 +0000 (0:00:01.261) 0:08:46.365 ********** 2025-05-31 16:25:43.774784 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774790 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774797 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774803 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774815 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774821 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774828 | orchestrator | 2025-05-31 16:25:43.774834 | orchestrator | TASK [ceph-handler : set_fact 
handler_mon_status] ****************************** 2025-05-31 16:25:43.774841 | orchestrator | Saturday 31 May 2025 16:22:21 +0000 (0:00:00.604) 0:08:46.970 ********** 2025-05-31 16:25:43.774848 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.774854 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.774861 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.774867 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.774874 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.774921 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.774929 | orchestrator | 2025-05-31 16:25:43.774936 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 16:25:43.774943 | orchestrator | Saturday 31 May 2025 16:22:22 +0000 (0:00:00.802) 0:08:47.772 ********** 2025-05-31 16:25:43.774949 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.774956 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.774963 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.774969 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.774976 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.774983 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.774989 | orchestrator | 2025-05-31 16:25:43.774996 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-31 16:25:43.775003 | orchestrator | Saturday 31 May 2025 16:22:22 +0000 (0:00:00.609) 0:08:48.381 ********** 2025-05-31 16:25:43.775009 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775016 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775022 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775029 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.775036 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.775042 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.775049 | orchestrator | 2025-05-31 16:25:43.775055 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.775062 | orchestrator | Saturday 31 May 2025 16:22:23 +0000 (0:00:00.819) 0:08:49.200 ********** 2025-05-31 16:25:43.775069 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775076 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775082 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775089 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.775096 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.775102 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.775109 | orchestrator | 2025-05-31 16:25:43.775116 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.775122 | orchestrator | Saturday 31 May 2025 16:22:24 +0000 (0:00:00.603) 0:08:49.804 ********** 2025-05-31 16:25:43.775129 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775135 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775146 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775153 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775159 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775166 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775173 | orchestrator | 2025-05-31 16:25:43.775179 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 
16:25:43.775186 | orchestrator | Saturday 31 May 2025 16:22:25 +0000 (0:00:00.788) 0:08:50.592 ********** 2025-05-31 16:25:43.775192 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775199 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775206 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775212 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775219 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775226 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775232 | orchestrator | 2025-05-31 16:25:43.775239 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.775252 | orchestrator | Saturday 31 May 2025 16:22:25 +0000 (0:00:00.609) 0:08:51.201 ********** 2025-05-31 16:25:43.775259 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.775273 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.775279 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.775286 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775292 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775299 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775306 | orchestrator | 2025-05-31 16:25:43.775312 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 16:25:43.775319 | orchestrator | Saturday 31 May 2025 16:22:26 +0000 (0:00:00.861) 0:08:52.063 ********** 2025-05-31 16:25:43.775326 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.775332 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.775339 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.775345 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.775352 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.775359 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.775365 | orchestrator | 2025-05-31 16:25:43.775372 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.775379 | orchestrator | Saturday 31 May 2025 16:22:27 +0000 (0:00:00.629) 0:08:52.692 ********** 2025-05-31 16:25:43.775386 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775392 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775398 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775404 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775411 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775417 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775423 | orchestrator | 2025-05-31 16:25:43.775429 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.775435 | orchestrator | Saturday 31 May 2025 16:22:28 +0000 (0:00:00.826) 0:08:53.518 ********** 2025-05-31 16:25:43.775462 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775469 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775475 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775481 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775487 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775493 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775500 | orchestrator | 2025-05-31 16:25:43.775506 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.775512 | orchestrator | Saturday 31 May 
2025 16:22:28 +0000 (0:00:00.617) 0:08:54.136 ********** 2025-05-31 16:25:43.775518 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775524 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775530 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775536 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775542 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775548 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775554 | orchestrator | 2025-05-31 16:25:43.775560 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.775566 | orchestrator | Saturday 31 May 2025 16:22:29 +0000 (0:00:00.852) 0:08:54.989 ********** 2025-05-31 16:25:43.775572 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775578 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775584 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775590 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775596 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775603 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775609 | orchestrator | 2025-05-31 16:25:43.775615 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.775621 | orchestrator | Saturday 31 May 2025 16:22:30 +0000 (0:00:00.603) 0:08:55.592 ********** 2025-05-31 16:25:43.775627 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775633 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775644 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775650 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775656 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775662 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775668 | orchestrator | 2025-05-31 16:25:43.775674 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.775680 | orchestrator | Saturday 31 May 2025 16:22:30 +0000 (0:00:00.869) 0:08:56.462 ********** 2025-05-31 16:25:43.775686 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775692 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775699 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775704 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775710 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775716 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775722 | orchestrator | 2025-05-31 16:25:43.775729 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.775735 | orchestrator | Saturday 31 May 2025 16:22:31 +0000 (0:00:00.622) 0:08:57.084 ********** 2025-05-31 16:25:43.775741 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775747 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775753 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775759 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775765 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775771 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775777 | orchestrator | 2025-05-31 16:25:43.775784 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.775790 | 
orchestrator | Saturday 31 May 2025 16:22:32 +0000 (0:00:00.847) 0:08:57.931 ********** 2025-05-31 16:25:43.775797 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775803 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775809 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775815 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775821 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775827 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775833 | orchestrator | 2025-05-31 16:25:43.775839 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.775845 | orchestrator | Saturday 31 May 2025 16:22:33 +0000 (0:00:00.696) 0:08:58.628 ********** 2025-05-31 16:25:43.775851 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775857 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775863 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775869 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775875 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775896 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775903 | orchestrator | 2025-05-31 16:25:43.775909 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.775915 | orchestrator | Saturday 31 May 2025 16:22:33 +0000 (0:00:00.836) 0:08:59.464 ********** 2025-05-31 16:25:43.775921 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775927 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775933 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775939 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.775945 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.775951 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.775958 | orchestrator | 2025-05-31 16:25:43.775964 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.775970 | orchestrator | Saturday 31 May 2025 16:22:34 +0000 (0:00:00.626) 0:09:00.091 ********** 2025-05-31 16:25:43.775976 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.775982 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.775988 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.775999 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776005 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776011 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776017 | orchestrator | 2025-05-31 16:25:43.776023 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.776029 | orchestrator | Saturday 31 May 2025 16:22:35 +0000 (0:00:00.852) 0:09:00.943 ********** 2025-05-31 16:25:43.776036 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776042 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776048 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776072 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776079 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776086 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776092 | orchestrator | 2025-05-31 16:25:43.776098 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.776104 | orchestrator | Saturday 31 May 2025 16:22:36 +0000 (0:00:00.635) 0:09:01.579 ********** 2025-05-31 16:25:43.776110 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.776117 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-31 16:25:43.776123 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776129 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.776135 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-31 16:25:43.776141 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776147 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.776153 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-31 16:25:43.776159 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776165 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.776172 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.776178 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776184 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.776190 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.776196 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776202 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.776208 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.776214 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776221 | orchestrator | 2025-05-31 16:25:43.776227 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.776233 | orchestrator | Saturday 31 May 2025 16:22:37 +0000 (0:00:00.948) 0:09:02.527 ********** 2025-05-31 16:25:43.776239 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-31 16:25:43.776246 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-31 16:25:43.776252 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776258 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-31 16:25:43.776264 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-31 16:25:43.776270 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-31 16:25:43.776276 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-31 16:25:43.776283 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776289 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-31 16:25:43.776295 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-31 16:25:43.776301 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776307 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-31 16:25:43.776313 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-31 16:25:43.776319 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776325 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776332 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-31 16:25:43.776342 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-31 16:25:43.776348 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776355 | orchestrator | 
2025-05-31 16:25:43.776361 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.776367 | orchestrator | Saturday 31 May 2025 16:22:37 +0000 (0:00:00.813) 0:09:03.341 ********** 2025-05-31 16:25:43.776373 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776379 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776385 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776391 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776397 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776404 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776410 | orchestrator | 2025-05-31 16:25:43.776416 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.776422 | orchestrator | Saturday 31 May 2025 16:22:38 +0000 (0:00:00.818) 0:09:04.160 ********** 2025-05-31 16:25:43.776428 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776434 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776440 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776450 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776456 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776462 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776468 | orchestrator | 2025-05-31 16:25:43.776474 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.776481 | orchestrator | Saturday 31 May 2025 16:22:39 +0000 (0:00:00.624) 0:09:04.785 ********** 2025-05-31 16:25:43.776487 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776493 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776499 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776505 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776511 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776517 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776523 | orchestrator | 2025-05-31 16:25:43.776529 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.776535 | orchestrator | Saturday 31 May 2025 16:22:40 +0000 (0:00:00.938) 0:09:05.723 ********** 2025-05-31 16:25:43.776541 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776547 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776554 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776560 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776566 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776572 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776578 | orchestrator | 2025-05-31 16:25:43.776584 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:25:43.776590 | orchestrator | Saturday 31 May 2025 16:22:41 +0000 (0:00:00.848) 0:09:06.572 ********** 2025-05-31 16:25:43.776613 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776621 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776627 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776633 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776639 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776645 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 16:25:43.776651 | orchestrator | 2025-05-31 16:25:43.776657 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.776663 | orchestrator | Saturday 31 May 2025 16:22:42 +0000 (0:00:01.005) 0:09:07.577 ********** 2025-05-31 16:25:43.776669 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776675 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776681 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776687 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776694 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776700 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776711 | orchestrator | 2025-05-31 16:25:43.776717 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.776723 | orchestrator | Saturday 31 May 2025 16:22:42 +0000 (0:00:00.606) 0:09:08.183 ********** 2025-05-31 16:25:43.776729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.776735 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.776741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.776748 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776754 | orchestrator | 2025-05-31 16:25:43.776760 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.776766 | orchestrator | Saturday 31 May 2025 16:22:43 +0000 (0:00:00.333) 0:09:08.516 ********** 2025-05-31 16:25:43.776772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.776778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.776784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.776790 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776796 | orchestrator | 2025-05-31 16:25:43.776803 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.776809 | orchestrator | Saturday 31 May 2025 16:22:43 +0000 (0:00:00.414) 0:09:08.931 ********** 2025-05-31 16:25:43.776815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.776821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.776827 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.776833 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776839 | orchestrator | 2025-05-31 16:25:43.776845 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.776852 | orchestrator | Saturday 31 May 2025 16:22:44 +0000 (0:00:00.646) 0:09:09.577 ********** 2025-05-31 16:25:43.776858 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776864 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776870 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776876 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776896 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776902 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776908 | orchestrator | 2025-05-31 16:25:43.776914 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 
2025-05-31 16:25:43.776920 | orchestrator | Saturday 31 May 2025 16:22:44 +0000 (0:00:00.763) 0:09:10.341 ********** 2025-05-31 16:25:43.776926 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.776932 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.776939 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.776945 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.776951 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.776957 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.776963 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.776969 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.776975 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.776981 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.776987 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.776993 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.776999 | orchestrator | 2025-05-31 16:25:43.777006 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.777027 | orchestrator | Saturday 31 May 2025 16:22:45 +0000 (0:00:00.657) 0:09:10.998 ********** 2025-05-31 16:25:43.777034 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777040 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777046 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777052 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777063 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777069 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777075 | orchestrator | 2025-05-31 16:25:43.777081 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.777087 | orchestrator | Saturday 31 May 2025 16:22:46 +0000 (0:00:00.696) 0:09:11.695 ********** 2025-05-31 16:25:43.777093 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777099 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777105 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777111 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777117 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777123 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777129 | orchestrator | 2025-05-31 16:25:43.777135 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.777141 | orchestrator | Saturday 31 May 2025 16:22:46 +0000 (0:00:00.589) 0:09:12.284 ********** 2025-05-31 16:25:43.777148 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-31 16:25:43.777154 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-31 16:25:43.777160 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777166 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777172 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-31 16:25:43.777178 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777204 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.777211 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777217 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.777223 | orchestrator | skipping: [testbed-node-4] 2025-05-31 
16:25:43.777229 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.777235 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777241 | orchestrator | 2025-05-31 16:25:43.777248 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.777254 | orchestrator | Saturday 31 May 2025 16:22:47 +0000 (0:00:01.069) 0:09:13.354 ********** 2025-05-31 16:25:43.777260 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777266 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777272 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777278 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.777284 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777290 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.777297 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777303 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.777309 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777315 | orchestrator | 2025-05-31 16:25:43.777321 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.777328 | orchestrator | Saturday 31 May 2025 16:22:48 +0000 (0:00:00.614) 0:09:13.968 ********** 2025-05-31 16:25:43.777334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 16:25:43.777340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 16:25:43.777346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 16:25:43.777352 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777358 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 16:25:43.777364 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-31 16:25:43.777371 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 16:25:43.777377 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 16:25:43.777406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-31 16:25:43.777413 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 16:25:43.777419 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.777432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.777438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.777444 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.777450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.777456 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777462 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:25:43.777468 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777474 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-05-31 16:25:43.777480 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.777486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.777492 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777499 | orchestrator | 2025-05-31 16:25:43.777505 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.777511 | orchestrator | Saturday 31 May 2025 16:22:49 +0000 (0:00:01.100) 0:09:15.068 ********** 2025-05-31 16:25:43.777517 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777523 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777529 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777535 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777542 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777559 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777566 | orchestrator | 2025-05-31 16:25:43.777572 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.777578 | orchestrator | Saturday 31 May 2025 16:22:50 +0000 (0:00:00.999) 0:09:16.067 ********** 2025-05-31 16:25:43.777584 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777590 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777596 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777602 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.777608 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777614 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.777620 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777626 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.777632 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777639 | orchestrator | 2025-05-31 16:25:43.777645 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.777651 | orchestrator | Saturday 31 May 2025 16:22:51 +0000 (0:00:01.095) 0:09:17.163 ********** 2025-05-31 16:25:43.777657 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777663 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777669 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777675 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777681 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777687 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777693 | orchestrator | 2025-05-31 16:25:43.777699 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.777709 | orchestrator | Saturday 31 May 2025 16:22:52 +0000 (0:00:01.113) 0:09:18.277 ********** 2025-05-31 16:25:43.777715 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:43.777721 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:43.777727 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:43.777733 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.777739 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.777761 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.777768 | orchestrator | 2025-05-31 16:25:43.777774 | orchestrator | TASK [ceph-crash : create client.crash keyring] 
******************************** 2025-05-31 16:25:43.777780 | orchestrator | Saturday 31 May 2025 16:22:53 +0000 (0:00:01.093) 0:09:19.370 ********** 2025-05-31 16:25:43.777786 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.777792 | orchestrator | 2025-05-31 16:25:43.777798 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-05-31 16:25:43.777804 | orchestrator | Saturday 31 May 2025 16:22:57 +0000 (0:00:03.658) 0:09:23.029 ********** 2025-05-31 16:25:43.777810 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.777816 | orchestrator | 2025-05-31 16:25:43.777822 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-31 16:25:43.777829 | orchestrator | Saturday 31 May 2025 16:22:59 +0000 (0:00:01.649) 0:09:24.680 ********** 2025-05-31 16:25:43.777835 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.777841 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.777847 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.777853 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.777859 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.777865 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.777871 | orchestrator | 2025-05-31 16:25:43.777917 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-31 16:25:43.777924 | orchestrator | Saturday 31 May 2025 16:23:00 +0000 (0:00:01.803) 0:09:26.484 ********** 2025-05-31 16:25:43.777930 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.777936 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.777942 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.777949 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.777955 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.777961 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.777967 | orchestrator | 2025-05-31 16:25:43.777973 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-31 16:25:43.777979 | orchestrator | Saturday 31 May 2025 16:23:02 +0000 (0:00:01.276) 0:09:27.760 ********** 2025-05-31 16:25:43.777985 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.777992 | orchestrator | 2025-05-31 16:25:43.777998 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-31 16:25:43.778004 | orchestrator | Saturday 31 May 2025 16:23:03 +0000 (0:00:01.022) 0:09:28.783 ********** 2025-05-31 16:25:43.778010 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.778034 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.778041 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.778048 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.778054 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.778060 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.778066 | orchestrator | 2025-05-31 16:25:43.778072 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-31 16:25:43.778078 | orchestrator | Saturday 31 May 2025 16:23:05 +0000 (0:00:01.725) 0:09:30.508 ********** 2025-05-31 16:25:43.778084 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.778090 | 
orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.778096 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.778102 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.778108 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.778114 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.778120 | orchestrator | 2025-05-31 16:25:43.778126 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-31 16:25:43.778132 | orchestrator | Saturday 31 May 2025 16:23:08 +0000 (0:00:03.943) 0:09:34.451 ********** 2025-05-31 16:25:43.778138 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.778150 | orchestrator | 2025-05-31 16:25:43.778156 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-31 16:25:43.778166 | orchestrator | Saturday 31 May 2025 16:23:10 +0000 (0:00:01.251) 0:09:35.703 ********** 2025-05-31 16:25:43.778172 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.778178 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.778184 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.778190 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778196 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778202 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778208 | orchestrator | 2025-05-31 16:25:43.778214 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-31 16:25:43.778220 | orchestrator | Saturday 31 May 2025 16:23:10 +0000 (0:00:00.682) 0:09:36.386 ********** 2025-05-31 16:25:43.778226 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:43.778232 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:43.778238 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.778244 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.778250 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.778256 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:43.778262 | orchestrator | 2025-05-31 16:25:43.778268 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-31 16:25:43.778274 | orchestrator | Saturday 31 May 2025 16:23:13 +0000 (0:00:02.953) 0:09:39.340 ********** 2025-05-31 16:25:43.778280 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:43.778287 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:43.778293 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:43.778299 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778305 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778311 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778317 | orchestrator | 2025-05-31 16:25:43.778323 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-31 16:25:43.778329 | orchestrator | 2025-05-31 16:25:43.778335 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.778346 | orchestrator | Saturday 31 May 2025 16:23:16 +0000 (0:00:02.451) 0:09:41.791 ********** 2025-05-31 16:25:43.778353 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.778359 | orchestrator | 2025-05-31 
16:25:43.778365 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.778371 | orchestrator | Saturday 31 May 2025 16:23:17 +0000 (0:00:00.729) 0:09:42.521 ********** 2025-05-31 16:25:43.778377 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778383 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778389 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778395 | orchestrator | 2025-05-31 16:25:43.778401 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.778408 | orchestrator | Saturday 31 May 2025 16:23:17 +0000 (0:00:00.300) 0:09:42.821 ********** 2025-05-31 16:25:43.778414 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778420 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778426 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778432 | orchestrator | 2025-05-31 16:25:43.778438 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.778444 | orchestrator | Saturday 31 May 2025 16:23:18 +0000 (0:00:00.728) 0:09:43.550 ********** 2025-05-31 16:25:43.778450 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778456 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778462 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778468 | orchestrator | 2025-05-31 16:25:43.778474 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.778481 | orchestrator | Saturday 31 May 2025 16:23:18 +0000 (0:00:00.830) 0:09:44.380 ********** 2025-05-31 16:25:43.778491 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778497 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778503 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778509 | orchestrator | 2025-05-31 16:25:43.778515 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.778522 | orchestrator | Saturday 31 May 2025 16:23:19 +0000 (0:00:00.983) 0:09:45.364 ********** 2025-05-31 16:25:43.778528 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778534 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778540 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778546 | orchestrator | 2025-05-31 16:25:43.778552 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.778558 | orchestrator | Saturday 31 May 2025 16:23:20 +0000 (0:00:00.321) 0:09:45.686 ********** 2025-05-31 16:25:43.778564 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778570 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778576 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778582 | orchestrator | 2025-05-31 16:25:43.778588 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-31 16:25:43.778594 | orchestrator | Saturday 31 May 2025 16:23:20 +0000 (0:00:00.302) 0:09:45.988 ********** 2025-05-31 16:25:43.778600 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778607 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778613 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778619 | orchestrator | 2025-05-31 16:25:43.778625 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 
2025-05-31 16:25:43.778631 | orchestrator | Saturday 31 May 2025 16:23:21 +0000 (0:00:00.550) 0:09:46.538 ********** 2025-05-31 16:25:43.778637 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778643 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778649 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778655 | orchestrator | 2025-05-31 16:25:43.778661 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.778667 | orchestrator | Saturday 31 May 2025 16:23:21 +0000 (0:00:00.316) 0:09:46.855 ********** 2025-05-31 16:25:43.778673 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778679 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778685 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778691 | orchestrator | 2025-05-31 16:25:43.778697 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.778703 | orchestrator | Saturday 31 May 2025 16:23:21 +0000 (0:00:00.317) 0:09:47.173 ********** 2025-05-31 16:25:43.778709 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778715 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778722 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778728 | orchestrator | 2025-05-31 16:25:43.778739 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.778746 | orchestrator | Saturday 31 May 2025 16:23:21 +0000 (0:00:00.319) 0:09:47.492 ********** 2025-05-31 16:25:43.778752 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778758 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778764 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778770 | orchestrator | 2025-05-31 16:25:43.778776 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.778782 | orchestrator | Saturday 31 May 2025 16:23:22 +0000 (0:00:00.937) 0:09:48.430 ********** 2025-05-31 16:25:43.778788 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778795 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778801 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778807 | orchestrator | 2025-05-31 16:25:43.778813 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-31 16:25:43.778819 | orchestrator | Saturday 31 May 2025 16:23:23 +0000 (0:00:00.316) 0:09:48.747 ********** 2025-05-31 16:25:43.778825 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.778836 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.778842 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.778848 | orchestrator | 2025-05-31 16:25:43.778854 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 16:25:43.778860 | orchestrator | Saturday 31 May 2025 16:23:23 +0000 (0:00:00.300) 0:09:49.047 ********** 2025-05-31 16:25:43.778866 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778872 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778909 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778917 | orchestrator | 2025-05-31 16:25:43.778923 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-31 16:25:43.778933 | orchestrator | Saturday 31 May 2025 16:23:23 +0000 (0:00:00.331) 
0:09:49.379 ********** 2025-05-31 16:25:43.778940 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778946 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778952 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778958 | orchestrator | 2025-05-31 16:25:43.778964 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.778971 | orchestrator | Saturday 31 May 2025 16:23:24 +0000 (0:00:00.593) 0:09:49.972 ********** 2025-05-31 16:25:43.778977 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.778983 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.778989 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.778995 | orchestrator | 2025-05-31 16:25:43.779001 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.779008 | orchestrator | Saturday 31 May 2025 16:23:24 +0000 (0:00:00.364) 0:09:50.337 ********** 2025-05-31 16:25:43.779014 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779020 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779026 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779032 | orchestrator | 2025-05-31 16:25:43.779038 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 16:25:43.779044 | orchestrator | Saturday 31 May 2025 16:23:25 +0000 (0:00:00.303) 0:09:50.640 ********** 2025-05-31 16:25:43.779050 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779057 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779063 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779069 | orchestrator | 2025-05-31 16:25:43.779075 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.779081 | orchestrator | Saturday 31 May 2025 16:23:25 +0000 (0:00:00.342) 0:09:50.983 ********** 2025-05-31 16:25:43.779087 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779093 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779099 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779105 | orchestrator | 2025-05-31 16:25:43.779111 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 16:25:43.779117 | orchestrator | Saturday 31 May 2025 16:23:26 +0000 (0:00:00.589) 0:09:51.573 ********** 2025-05-31 16:25:43.779124 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.779130 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.779136 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.779142 | orchestrator | 2025-05-31 16:25:43.779148 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.779153 | orchestrator | Saturday 31 May 2025 16:23:26 +0000 (0:00:00.398) 0:09:51.971 ********** 2025-05-31 16:25:43.779158 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779164 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779169 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779175 | orchestrator | 2025-05-31 16:25:43.779180 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.779186 | orchestrator | Saturday 31 May 2025 16:23:26 +0000 (0:00:00.347) 0:09:52.319 ********** 2025-05-31 16:25:43.779191 | orchestrator | skipping: [testbed-node-3] 2025-05-31 
16:25:43.779197 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779202 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779212 | orchestrator | 2025-05-31 16:25:43.779217 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.779222 | orchestrator | Saturday 31 May 2025 16:23:27 +0000 (0:00:00.375) 0:09:52.695 ********** 2025-05-31 16:25:43.779228 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779233 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779239 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779244 | orchestrator | 2025-05-31 16:25:43.779250 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.779256 | orchestrator | Saturday 31 May 2025 16:23:27 +0000 (0:00:00.644) 0:09:53.339 ********** 2025-05-31 16:25:43.779261 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779267 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779272 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779278 | orchestrator | 2025-05-31 16:25:43.779283 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.779289 | orchestrator | Saturday 31 May 2025 16:23:28 +0000 (0:00:00.306) 0:09:53.646 ********** 2025-05-31 16:25:43.779295 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779300 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779305 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779311 | orchestrator | 2025-05-31 16:25:43.779320 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.779326 | orchestrator | Saturday 31 May 2025 16:23:28 +0000 (0:00:00.323) 0:09:53.970 ********** 2025-05-31 16:25:43.779331 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779336 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779342 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779347 | orchestrator | 2025-05-31 16:25:43.779352 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.779358 | orchestrator | Saturday 31 May 2025 16:23:28 +0000 (0:00:00.292) 0:09:54.263 ********** 2025-05-31 16:25:43.779363 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779368 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779374 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779379 | orchestrator | 2025-05-31 16:25:43.779385 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.779390 | orchestrator | Saturday 31 May 2025 16:23:29 +0000 (0:00:00.524) 0:09:54.788 ********** 2025-05-31 16:25:43.779396 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779402 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779407 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779412 | orchestrator | 2025-05-31 16:25:43.779418 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.779423 | orchestrator | Saturday 31 May 2025 16:23:29 +0000 (0:00:00.326) 0:09:55.115 ********** 2025-05-31 16:25:43.779429 | orchestrator | skipping: [testbed-node-3] 2025-05-31 
16:25:43.779434 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779440 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779445 | orchestrator | 2025-05-31 16:25:43.779454 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.779460 | orchestrator | Saturday 31 May 2025 16:23:29 +0000 (0:00:00.337) 0:09:55.452 ********** 2025-05-31 16:25:43.779465 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779470 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779476 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779481 | orchestrator | 2025-05-31 16:25:43.779487 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.779495 | orchestrator | Saturday 31 May 2025 16:23:30 +0000 (0:00:00.328) 0:09:55.780 ********** 2025-05-31 16:25:43.779505 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779514 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779546 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779558 | orchestrator | 2025-05-31 16:25:43.779567 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.779576 | orchestrator | Saturday 31 May 2025 16:23:30 +0000 (0:00:00.620) 0:09:56.400 ********** 2025-05-31 16:25:43.779585 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779593 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779601 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779610 | orchestrator | 2025-05-31 16:25:43.779619 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.779627 | orchestrator | Saturday 31 May 2025 16:23:31 +0000 (0:00:00.391) 0:09:56.792 ********** 2025-05-31 16:25:43.779637 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.779646 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.779655 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779664 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.779673 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.779682 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779691 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.779701 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.779707 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779712 | orchestrator | 2025-05-31 16:25:43.779717 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.779723 | orchestrator | Saturday 31 May 2025 16:23:31 +0000 (0:00:00.464) 0:09:57.256 ********** 2025-05-31 16:25:43.779728 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-31 16:25:43.779734 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-31 16:25:43.779739 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779744 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-31 16:25:43.779750 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-31 16:25:43.779755 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779761 | orchestrator | 
skipping: [testbed-node-5] => (item=osd memory target)  2025-05-31 16:25:43.779766 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-31 16:25:43.779771 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779777 | orchestrator | 2025-05-31 16:25:43.779782 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.779788 | orchestrator | Saturday 31 May 2025 16:23:32 +0000 (0:00:00.396) 0:09:57.653 ********** 2025-05-31 16:25:43.779793 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779798 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779804 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779809 | orchestrator | 2025-05-31 16:25:43.779814 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.779820 | orchestrator | Saturday 31 May 2025 16:23:32 +0000 (0:00:00.595) 0:09:58.249 ********** 2025-05-31 16:25:43.779825 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779831 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779836 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779842 | orchestrator | 2025-05-31 16:25:43.779847 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.779853 | orchestrator | Saturday 31 May 2025 16:23:33 +0000 (0:00:00.367) 0:09:58.617 ********** 2025-05-31 16:25:43.779858 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779864 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779874 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779892 | orchestrator | 2025-05-31 16:25:43.779898 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.779903 | orchestrator | Saturday 31 May 2025 16:23:33 +0000 (0:00:00.370) 0:09:58.987 ********** 2025-05-31 16:25:43.779915 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779920 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779926 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779931 | orchestrator | 2025-05-31 16:25:43.779936 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:25:43.779942 | orchestrator | Saturday 31 May 2025 16:23:33 +0000 (0:00:00.388) 0:09:59.375 ********** 2025-05-31 16:25:43.779947 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779953 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779958 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779964 | orchestrator | 2025-05-31 16:25:43.779969 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.779975 | orchestrator | Saturday 31 May 2025 16:23:34 +0000 (0:00:00.804) 0:10:00.179 ********** 2025-05-31 16:25:43.779980 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.779985 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.779991 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.779996 | orchestrator | 2025-05-31 16:25:43.780002 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.780007 | orchestrator | Saturday 31 May 2025 16:23:35 +0000 (0:00:00.357) 0:10:00.537 
********** 2025-05-31 16:25:43.780012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.780024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.780030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.780035 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780041 | orchestrator | 2025-05-31 16:25:43.780046 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.780052 | orchestrator | Saturday 31 May 2025 16:23:35 +0000 (0:00:00.429) 0:10:00.966 ********** 2025-05-31 16:25:43.780057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.780063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.780068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.780074 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780079 | orchestrator | 2025-05-31 16:25:43.780084 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.780090 | orchestrator | Saturday 31 May 2025 16:23:35 +0000 (0:00:00.398) 0:10:01.365 ********** 2025-05-31 16:25:43.780095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.780100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.780106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.780111 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780117 | orchestrator | 2025-05-31 16:25:43.780123 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.780128 | orchestrator | Saturday 31 May 2025 16:23:36 +0000 (0:00:00.376) 0:10:01.741 ********** 2025-05-31 16:25:43.780134 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780139 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780144 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780150 | orchestrator | 2025-05-31 16:25:43.780155 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.780161 | orchestrator | Saturday 31 May 2025 16:23:36 +0000 (0:00:00.275) 0:10:02.017 ********** 2025-05-31 16:25:43.780166 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.780172 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780178 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.780183 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780188 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.780194 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780214 | orchestrator | 2025-05-31 16:25:43.780221 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.780226 | orchestrator | Saturday 31 May 2025 16:23:37 +0000 (0:00:00.621) 0:10:02.638 ********** 2025-05-31 16:25:43.780232 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780237 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780243 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780248 | orchestrator | 2025-05-31 16:25:43.780253 | orchestrator | TASK [ceph-facts : reset rgw_instances 
(workaround)] *************************** 2025-05-31 16:25:43.780259 | orchestrator | Saturday 31 May 2025 16:23:37 +0000 (0:00:00.235) 0:10:02.874 ********** 2025-05-31 16:25:43.780265 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780270 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780275 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780281 | orchestrator | 2025-05-31 16:25:43.780286 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.780292 | orchestrator | Saturday 31 May 2025 16:23:37 +0000 (0:00:00.266) 0:10:03.141 ********** 2025-05-31 16:25:43.780297 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.780303 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780308 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.780313 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780319 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.780324 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780330 | orchestrator | 2025-05-31 16:25:43.780335 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.780341 | orchestrator | Saturday 31 May 2025 16:23:37 +0000 (0:00:00.328) 0:10:03.469 ********** 2025-05-31 16:25:43.780346 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.780352 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780368 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.780374 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780379 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.780385 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780390 | orchestrator | 2025-05-31 16:25:43.780395 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.780401 | orchestrator | Saturday 31 May 2025 16:23:38 +0000 (0:00:00.462) 0:10:03.931 ********** 2025-05-31 16:25:43.780407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.780412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.780418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.780423 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.780428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.780434 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:25:43.780439 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780445 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780450 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:25:43.780455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.780464 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.780470 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780475 | 
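The skipped ceph-facts tasks above assemble a per-host rgw_instances fact, whose shape is visible in the skipped items (instance_name, radosgw_address, radosgw_frontend_port). A minimal sketch of how such a fact can be built with set_fact in a loop, assuming one RGW instance per host and placeholder values for address and port (this is illustrative, not the ceph-ansible source):

---
# Illustrative sketch only.
- hosts: localhost
  gather_facts: false
  vars:
    radosgw_instances_count: 1            # assumption: one RGW instance per host
    radosgw_frontend_port: 8081           # port mirrors the skipped items above
    _radosgw_address: "192.168.16.13"     # placeholder address
  tasks:
    - name: set_fact rgw_instances (sketch)
      ansible.builtin.set_fact:
        rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port + item}] }}"
      loop: "{{ range(0, radosgw_instances_count) | list }}"

    - name: show the resulting fact
      ansible.builtin.debug:
        var: rgw_instances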
orchestrator | 2025-05-31 16:25:43.780481 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.780486 | orchestrator | Saturday 31 May 2025 16:23:38 +0000 (0:00:00.516) 0:10:04.448 ********** 2025-05-31 16:25:43.780503 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780508 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780514 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780519 | orchestrator | 2025-05-31 16:25:43.780525 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.780530 | orchestrator | Saturday 31 May 2025 16:23:39 +0000 (0:00:00.606) 0:10:05.054 ********** 2025-05-31 16:25:43.780535 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.780541 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780546 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.780552 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780557 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.780562 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780568 | orchestrator | 2025-05-31 16:25:43.780573 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.780579 | orchestrator | Saturday 31 May 2025 16:23:40 +0000 (0:00:00.489) 0:10:05.544 ********** 2025-05-31 16:25:43.780584 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780589 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780595 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780600 | orchestrator | 2025-05-31 16:25:43.780606 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.780611 | orchestrator | Saturday 31 May 2025 16:23:40 +0000 (0:00:00.619) 0:10:06.163 ********** 2025-05-31 16:25:43.780616 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780622 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780627 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780633 | orchestrator | 2025-05-31 16:25:43.780638 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-31 16:25:43.780644 | orchestrator | Saturday 31 May 2025 16:23:41 +0000 (0:00:00.457) 0:10:06.621 ********** 2025-05-31 16:25:43.780649 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780654 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780660 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-31 16:25:43.780665 | orchestrator | 2025-05-31 16:25:43.780670 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-31 16:25:43.780676 | orchestrator | Saturday 31 May 2025 16:23:41 +0000 (0:00:00.365) 0:10:06.986 ********** 2025-05-31 16:25:43.780681 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.780686 | orchestrator | 2025-05-31 16:25:43.780692 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-31 16:25:43.780697 | orchestrator | Saturday 31 May 2025 16:23:43 +0000 (0:00:01.860) 0:10:08.847 ********** 2025-05-31 16:25:43.780705 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 
'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-31 16:25:43.780712 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780717 | orchestrator | 2025-05-31 16:25:43.780723 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-31 16:25:43.780728 | orchestrator | Saturday 31 May 2025 16:23:43 +0000 (0:00:00.415) 0:10:09.263 ********** 2025-05-31 16:25:43.780735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:25:43.780748 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:25:43.780761 | orchestrator | 2025-05-31 16:25:43.780767 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-31 16:25:43.780772 | orchestrator | Saturday 31 May 2025 16:23:50 +0000 (0:00:06.844) 0:10:16.107 ********** 2025-05-31 16:25:43.780777 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:25:43.780782 | orchestrator | 2025-05-31 16:25:43.780788 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-31 16:25:43.780793 | orchestrator | Saturday 31 May 2025 16:23:53 +0000 (0:00:02.965) 0:10:19.072 ********** 2025-05-31 16:25:43.780798 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.780804 | orchestrator | 2025-05-31 16:25:43.780809 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-31 16:25:43.780815 | orchestrator | Saturday 31 May 2025 16:23:54 +0000 (0:00:00.581) 0:10:19.654 ********** 2025-05-31 16:25:43.780820 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-31 16:25:43.780825 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-31 16:25:43.780830 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-31 16:25:43.780836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-31 16:25:43.780844 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-31 16:25:43.780850 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-31 16:25:43.780855 | orchestrator | 2025-05-31 16:25:43.780860 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-31 16:25:43.780866 | orchestrator | Saturday 31 May 2025 16:23:55 +0000 (0:00:01.111) 0:10:20.765 ********** 2025-05-31 16:25:43.780871 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:25:43.780876 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.780893 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 16:25:43.780898 | 
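The two changed tasks above (create filesystem pools, create ceph filesystem) are delegated to the first monitor and functionally reduce to a couple of ceph CLI calls. A minimal sketch, assuming a "mons" inventory group and reusing the pool names and pg_num shown in the log (illustrative, not the ceph-mds role itself):

---
# Illustrative sketch only.
- hosts: mons[0]
  gather_facts: false
  tasks:
    - name: create filesystem pools (data + metadata)
      ansible.builtin.command: "ceph osd pool create {{ item }} 16"
      loop:
        - cephfs_data
        - cephfs_metadata
      changed_when: true

    - name: create ceph filesystem
      ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data
      changed_when: true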
orchestrator | 2025-05-31 16:25:43.780904 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-31 16:25:43.780909 | orchestrator | Saturday 31 May 2025 16:23:57 +0000 (0:00:02.002) 0:10:22.767 ********** 2025-05-31 16:25:43.780914 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 16:25:43.780920 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.780925 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.780931 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 16:25:43.780936 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.780942 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.780947 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 16:25:43.780952 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.780958 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.780963 | orchestrator | 2025-05-31 16:25:43.780968 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-31 16:25:43.780974 | orchestrator | Saturday 31 May 2025 16:23:58 +0000 (0:00:01.160) 0:10:23.928 ********** 2025-05-31 16:25:43.780979 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.780984 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.780990 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.780995 | orchestrator | 2025-05-31 16:25:43.781000 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-31 16:25:43.781006 | orchestrator | Saturday 31 May 2025 16:23:59 +0000 (0:00:00.602) 0:10:24.530 ********** 2025-05-31 16:25:43.781011 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.781016 | orchestrator | 2025-05-31 16:25:43.781022 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-31 16:25:43.781031 | orchestrator | Saturday 31 May 2025 16:23:59 +0000 (0:00:00.521) 0:10:25.052 ********** 2025-05-31 16:25:43.781036 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.781042 | orchestrator | 2025-05-31 16:25:43.781047 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-31 16:25:43.781052 | orchestrator | Saturday 31 May 2025 16:24:00 +0000 (0:00:00.724) 0:10:25.776 ********** 2025-05-31 16:25:43.781058 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781063 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781068 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.781074 | orchestrator | 2025-05-31 16:25:43.781079 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-31 16:25:43.781084 | orchestrator | Saturday 31 May 2025 16:24:01 +0000 (0:00:01.314) 0:10:27.091 ********** 2025-05-31 16:25:43.781090 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781095 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781101 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.781106 | orchestrator | 2025-05-31 16:25:43.781111 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-31 16:25:43.781117 | 
orchestrator | Saturday 31 May 2025 16:24:02 +0000 (0:00:01.129) 0:10:28.220 ********** 2025-05-31 16:25:43.781122 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781127 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781132 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.781138 | orchestrator | 2025-05-31 16:25:43.781143 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-31 16:25:43.781148 | orchestrator | Saturday 31 May 2025 16:24:04 +0000 (0:00:01.943) 0:10:30.164 ********** 2025-05-31 16:25:43.781154 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781159 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.781168 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781173 | orchestrator | 2025-05-31 16:25:43.781178 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-31 16:25:43.781184 | orchestrator | Saturday 31 May 2025 16:24:06 +0000 (0:00:01.900) 0:10:32.064 ********** 2025-05-31 16:25:43.781189 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-31 16:25:43.781195 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-31 16:25:43.781200 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-31 16:25:43.781205 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781211 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781216 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781221 | orchestrator | 2025-05-31 16:25:43.781226 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-31 16:25:43.781232 | orchestrator | Saturday 31 May 2025 16:24:23 +0000 (0:00:17.032) 0:10:49.096 ********** 2025-05-31 16:25:43.781237 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781242 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781247 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.781253 | orchestrator | 2025-05-31 16:25:43.781258 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-31 16:25:43.781263 | orchestrator | Saturday 31 May 2025 16:24:24 +0000 (0:00:00.692) 0:10:49.789 ********** 2025-05-31 16:25:43.781268 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.781274 | orchestrator | 2025-05-31 16:25:43.781282 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-31 16:25:43.781288 | orchestrator | Saturday 31 May 2025 16:24:25 +0000 (0:00:00.713) 0:10:50.502 ********** 2025-05-31 16:25:43.781293 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781299 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781308 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781313 | orchestrator | 2025-05-31 16:25:43.781319 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-31 16:25:43.781324 | orchestrator | Saturday 31 May 2025 16:24:25 +0000 (0:00:00.333) 0:10:50.835 ********** 2025-05-31 16:25:43.781329 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781335 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781340 | orchestrator | 
changed: [testbed-node-5] 2025-05-31 16:25:43.781345 | orchestrator | 2025-05-31 16:25:43.781350 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-31 16:25:43.781356 | orchestrator | Saturday 31 May 2025 16:24:26 +0000 (0:00:01.177) 0:10:52.013 ********** 2025-05-31 16:25:43.781361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.781366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.781372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.781377 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781382 | orchestrator | 2025-05-31 16:25:43.781388 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-31 16:25:43.781393 | orchestrator | Saturday 31 May 2025 16:24:27 +0000 (0:00:01.143) 0:10:53.156 ********** 2025-05-31 16:25:43.781398 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781404 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781409 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781415 | orchestrator | 2025-05-31 16:25:43.781420 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-31 16:25:43.781426 | orchestrator | Saturday 31 May 2025 16:24:27 +0000 (0:00:00.329) 0:10:53.486 ********** 2025-05-31 16:25:43.781431 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.781436 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.781442 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.781447 | orchestrator | 2025-05-31 16:25:43.781452 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-31 16:25:43.781458 | orchestrator | 2025-05-31 16:25:43.781463 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-31 16:25:43.781468 | orchestrator | Saturday 31 May 2025 16:24:29 +0000 (0:00:01.955) 0:10:55.442 ********** 2025-05-31 16:25:43.781474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.781479 | orchestrator | 2025-05-31 16:25:43.781484 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-31 16:25:43.781490 | orchestrator | Saturday 31 May 2025 16:24:30 +0000 (0:00:00.721) 0:10:56.164 ********** 2025-05-31 16:25:43.781495 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781501 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781506 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781511 | orchestrator | 2025-05-31 16:25:43.781516 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-31 16:25:43.781522 | orchestrator | Saturday 31 May 2025 16:24:30 +0000 (0:00:00.317) 0:10:56.481 ********** 2025-05-31 16:25:43.781527 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781532 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781538 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781543 | orchestrator | 2025-05-31 16:25:43.781548 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-31 16:25:43.781554 | orchestrator | Saturday 31 May 2025 16:24:31 +0000 (0:00:00.716) 0:10:57.198 ********** 2025-05-31 
16:25:43.781559 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781564 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781572 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781581 | orchestrator | 2025-05-31 16:25:43.781589 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-31 16:25:43.781598 | orchestrator | Saturday 31 May 2025 16:24:32 +0000 (0:00:01.050) 0:10:58.248 ********** 2025-05-31 16:25:43.781616 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781626 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781633 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781642 | orchestrator | 2025-05-31 16:25:43.781651 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-31 16:25:43.781665 | orchestrator | Saturday 31 May 2025 16:24:33 +0000 (0:00:00.711) 0:10:58.960 ********** 2025-05-31 16:25:43.781673 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781679 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781684 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781690 | orchestrator | 2025-05-31 16:25:43.781695 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-31 16:25:43.781701 | orchestrator | Saturday 31 May 2025 16:24:33 +0000 (0:00:00.319) 0:10:59.280 ********** 2025-05-31 16:25:43.781706 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781711 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781717 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781722 | orchestrator | 2025-05-31 16:25:43.781727 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-31 16:25:43.781733 | orchestrator | Saturday 31 May 2025 16:24:34 +0000 (0:00:00.323) 0:10:59.604 ********** 2025-05-31 16:25:43.781738 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781744 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781749 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781754 | orchestrator | 2025-05-31 16:25:43.781760 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-31 16:25:43.781765 | orchestrator | Saturday 31 May 2025 16:24:34 +0000 (0:00:00.547) 0:11:00.152 ********** 2025-05-31 16:25:43.781770 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781776 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781781 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781787 | orchestrator | 2025-05-31 16:25:43.781792 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-31 16:25:43.781802 | orchestrator | Saturday 31 May 2025 16:24:34 +0000 (0:00:00.317) 0:11:00.470 ********** 2025-05-31 16:25:43.781807 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781813 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781818 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781823 | orchestrator | 2025-05-31 16:25:43.781829 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-31 16:25:43.781834 | orchestrator | Saturday 31 May 2025 16:24:35 +0000 (0:00:00.346) 0:11:00.816 ********** 2025-05-31 16:25:43.781839 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781845 | 
orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781851 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781856 | orchestrator | 2025-05-31 16:25:43.781861 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-31 16:25:43.781867 | orchestrator | Saturday 31 May 2025 16:24:35 +0000 (0:00:00.297) 0:11:01.114 ********** 2025-05-31 16:25:43.781872 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.781894 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.781900 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.781906 | orchestrator | 2025-05-31 16:25:43.781911 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-31 16:25:43.781917 | orchestrator | Saturday 31 May 2025 16:24:36 +0000 (0:00:00.995) 0:11:02.109 ********** 2025-05-31 16:25:43.781922 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781928 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781933 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781938 | orchestrator | 2025-05-31 16:25:43.781944 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-31 16:25:43.781949 | orchestrator | Saturday 31 May 2025 16:24:36 +0000 (0:00:00.306) 0:11:02.415 ********** 2025-05-31 16:25:43.781954 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.781964 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.781970 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.781975 | orchestrator | 2025-05-31 16:25:43.781981 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-31 16:25:43.781986 | orchestrator | Saturday 31 May 2025 16:24:37 +0000 (0:00:00.290) 0:11:02.706 ********** 2025-05-31 16:25:43.781994 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.782003 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.782012 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.782062 | orchestrator | 2025-05-31 16:25:43.782072 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-31 16:25:43.782081 | orchestrator | Saturday 31 May 2025 16:24:37 +0000 (0:00:00.326) 0:11:03.033 ********** 2025-05-31 16:25:43.782090 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.782098 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.782103 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.782108 | orchestrator | 2025-05-31 16:25:43.782114 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-31 16:25:43.782119 | orchestrator | Saturday 31 May 2025 16:24:38 +0000 (0:00:00.612) 0:11:03.645 ********** 2025-05-31 16:25:43.782125 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.782130 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.782135 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.782141 | orchestrator | 2025-05-31 16:25:43.782146 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-31 16:25:43.782152 | orchestrator | Saturday 31 May 2025 16:24:38 +0000 (0:00:00.316) 0:11:03.961 ********** 2025-05-31 16:25:43.782157 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782163 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782168 | orchestrator | skipping: [testbed-node-5] 2025-05-31 
16:25:43.782174 | orchestrator | 2025-05-31 16:25:43.782179 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-31 16:25:43.782184 | orchestrator | Saturday 31 May 2025 16:24:38 +0000 (0:00:00.334) 0:11:04.296 ********** 2025-05-31 16:25:43.782189 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782195 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782200 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782206 | orchestrator | 2025-05-31 16:25:43.782211 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-31 16:25:43.782216 | orchestrator | Saturday 31 May 2025 16:24:39 +0000 (0:00:00.314) 0:11:04.610 ********** 2025-05-31 16:25:43.782221 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782227 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782232 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782238 | orchestrator | 2025-05-31 16:25:43.782243 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-31 16:25:43.782263 | orchestrator | Saturday 31 May 2025 16:24:39 +0000 (0:00:00.586) 0:11:05.197 ********** 2025-05-31 16:25:43.782269 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.782274 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.782279 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.782285 | orchestrator | 2025-05-31 16:25:43.782290 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-31 16:25:43.782295 | orchestrator | Saturday 31 May 2025 16:24:40 +0000 (0:00:00.336) 0:11:05.533 ********** 2025-05-31 16:25:43.782301 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782306 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782311 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782317 | orchestrator | 2025-05-31 16:25:43.782322 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-31 16:25:43.782328 | orchestrator | Saturday 31 May 2025 16:24:40 +0000 (0:00:00.315) 0:11:05.849 ********** 2025-05-31 16:25:43.782333 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782338 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782344 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782354 | orchestrator | 2025-05-31 16:25:43.782359 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-31 16:25:43.782365 | orchestrator | Saturday 31 May 2025 16:24:40 +0000 (0:00:00.314) 0:11:06.163 ********** 2025-05-31 16:25:43.782370 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782375 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782380 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782386 | orchestrator | 2025-05-31 16:25:43.782391 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-31 16:25:43.782401 | orchestrator | Saturday 31 May 2025 16:24:41 +0000 (0:00:00.586) 0:11:06.750 ********** 2025-05-31 16:25:43.782407 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782412 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782418 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782423 | orchestrator | 2025-05-31 16:25:43.782429 | orchestrator | 
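The "check for a ... container" tasks and the handler_*_status facts above follow a single pattern: probe for a running container without ever failing the play, then record whether anything was found. A minimal sketch, assuming podman and a hypothetical ceph-mds name filter (illustrative, not the ceph-handler role):

---
# Illustrative sketch only.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: check for a mds container (sketch)
      ansible.builtin.command: podman ps -q --filter name=ceph-mds
      register: ceph_mds_container_stat
      changed_when: false
      failed_when: false

    - name: set_fact handler_mds_status (sketch)
      ansible.builtin.set_fact:
        handler_mds_status: "{{ ceph_mds_container_stat.stdout_lines | length > 0 }}"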
TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-31 16:25:43.782434 | orchestrator | Saturday 31 May 2025 16:24:41 +0000 (0:00:00.371) 0:11:07.121 ********** 2025-05-31 16:25:43.782439 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782444 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782450 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782455 | orchestrator | 2025-05-31 16:25:43.782460 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-31 16:25:43.782466 | orchestrator | Saturday 31 May 2025 16:24:41 +0000 (0:00:00.335) 0:11:07.456 ********** 2025-05-31 16:25:43.782471 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782476 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782482 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782488 | orchestrator | 2025-05-31 16:25:43.782493 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-31 16:25:43.782498 | orchestrator | Saturday 31 May 2025 16:24:42 +0000 (0:00:00.351) 0:11:07.808 ********** 2025-05-31 16:25:43.782503 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782509 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782514 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782519 | orchestrator | 2025-05-31 16:25:43.782525 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-31 16:25:43.782530 | orchestrator | Saturday 31 May 2025 16:24:42 +0000 (0:00:00.634) 0:11:08.442 ********** 2025-05-31 16:25:43.782535 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782541 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782546 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782551 | orchestrator | 2025-05-31 16:25:43.782556 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-31 16:25:43.782562 | orchestrator | Saturday 31 May 2025 16:24:43 +0000 (0:00:00.339) 0:11:08.782 ********** 2025-05-31 16:25:43.782567 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782572 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782578 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782583 | orchestrator | 2025-05-31 16:25:43.782588 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-31 16:25:43.782594 | orchestrator | Saturday 31 May 2025 16:24:43 +0000 (0:00:00.322) 0:11:09.104 ********** 2025-05-31 16:25:43.782599 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782604 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782609 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782615 | orchestrator | 2025-05-31 16:25:43.782620 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-31 16:25:43.782625 | orchestrator | Saturday 31 May 2025 16:24:43 +0000 (0:00:00.379) 0:11:09.483 ********** 2025-05-31 16:25:43.782631 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782636 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782645 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782651 | orchestrator | 2025-05-31 
16:25:43.782656 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-31 16:25:43.782661 | orchestrator | Saturday 31 May 2025 16:24:44 +0000 (0:00:00.696) 0:11:10.180 ********** 2025-05-31 16:25:43.782666 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782672 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782677 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782682 | orchestrator | 2025-05-31 16:25:43.782688 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-31 16:25:43.782693 | orchestrator | Saturday 31 May 2025 16:24:45 +0000 (0:00:00.333) 0:11:10.513 ********** 2025-05-31 16:25:43.782698 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.782704 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-31 16:25:43.782709 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782714 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.782719 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-31 16:25:43.782725 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782730 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.782738 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-31 16:25:43.782743 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782749 | orchestrator | 2025-05-31 16:25:43.782754 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-31 16:25:43.782759 | orchestrator | Saturday 31 May 2025 16:24:45 +0000 (0:00:00.365) 0:11:10.879 ********** 2025-05-31 16:25:43.782764 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-31 16:25:43.782769 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-31 16:25:43.782775 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782780 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-31 16:25:43.782785 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-31 16:25:43.782791 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782796 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-31 16:25:43.782801 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-31 16:25:43.782806 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782812 | orchestrator | 2025-05-31 16:25:43.782817 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-31 16:25:43.782822 | orchestrator | Saturday 31 May 2025 16:24:45 +0000 (0:00:00.341) 0:11:11.220 ********** 2025-05-31 16:25:43.782827 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782833 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782838 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782843 | orchestrator | 2025-05-31 16:25:43.782849 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-31 16:25:43.782857 | orchestrator | Saturday 31 May 2025 16:24:46 +0000 (0:00:00.592) 0:11:11.813 ********** 2025-05-31 16:25:43.782863 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782868 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782874 | orchestrator | skipping: [testbed-node-5] 
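The skipped num_osds bookkeeping above adds the OSDs that already exist (from 'ceph-volume lvm list') to the count planned by 'ceph-volume lvm batch --report'. A rough sketch of the "add existing osds" half, under the assumption that the JSON output of 'ceph-volume lvm list' is a mapping keyed by OSD id (the real role also handles legacy and new report formats):

---
# Illustrative sketch only.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: run 'ceph-volume lvm list' (sketch)
      ansible.builtin.command: ceph-volume lvm list --format json
      register: lvm_list
      changed_when: false

    - name: set_fact num_osds (add existing osds) (sketch)
      ansible.builtin.set_fact:
        num_osds: "{{ (num_osds | default(0) | int) + (lvm_list.stdout | from_json | length) }}"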
2025-05-31 16:25:43.782909 | orchestrator | 2025-05-31 16:25:43.782916 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:25:43.782921 | orchestrator | Saturday 31 May 2025 16:24:46 +0000 (0:00:00.324) 0:11:12.137 ********** 2025-05-31 16:25:43.782927 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782932 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782937 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782943 | orchestrator | 2025-05-31 16:25:43.782948 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:25:43.782953 | orchestrator | Saturday 31 May 2025 16:24:46 +0000 (0:00:00.310) 0:11:12.448 ********** 2025-05-31 16:25:43.782963 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.782968 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.782974 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.782979 | orchestrator | 2025-05-31 16:25:43.782984 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:25:43.782989 | orchestrator | Saturday 31 May 2025 16:24:47 +0000 (0:00:00.328) 0:11:12.776 ********** 2025-05-31 16:25:43.782995 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783000 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783005 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783011 | orchestrator | 2025-05-31 16:25:43.783016 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:25:43.783022 | orchestrator | Saturday 31 May 2025 16:24:47 +0000 (0:00:00.594) 0:11:13.370 ********** 2025-05-31 16:25:43.783027 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783032 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783037 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783042 | orchestrator | 2025-05-31 16:25:43.783048 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:25:43.783053 | orchestrator | Saturday 31 May 2025 16:24:48 +0000 (0:00:00.337) 0:11:13.707 ********** 2025-05-31 16:25:43.783058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.783064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.783069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.783074 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783079 | orchestrator | 2025-05-31 16:25:43.783085 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:25:43.783090 | orchestrator | Saturday 31 May 2025 16:24:48 +0000 (0:00:00.413) 0:11:14.121 ********** 2025-05-31 16:25:43.783095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.783101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.783106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.783111 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783116 | orchestrator | 2025-05-31 16:25:43.783122 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:25:43.783127 | 
orchestrator | Saturday 31 May 2025 16:24:49 +0000 (0:00:00.417) 0:11:14.539 ********** 2025-05-31 16:25:43.783132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.783138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.783143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.783149 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783153 | orchestrator | 2025-05-31 16:25:43.783158 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.783163 | orchestrator | Saturday 31 May 2025 16:24:49 +0000 (0:00:00.415) 0:11:14.954 ********** 2025-05-31 16:25:43.783167 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783172 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783177 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783181 | orchestrator | 2025-05-31 16:25:43.783186 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:25:43.783191 | orchestrator | Saturday 31 May 2025 16:24:49 +0000 (0:00:00.313) 0:11:15.268 ********** 2025-05-31 16:25:43.783199 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.783204 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783208 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.783213 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783218 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.783222 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783241 | orchestrator | 2025-05-31 16:25:43.783246 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:25:43.783251 | orchestrator | Saturday 31 May 2025 16:24:50 +0000 (0:00:00.816) 0:11:16.084 ********** 2025-05-31 16:25:43.783255 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783260 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783265 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783269 | orchestrator | 2025-05-31 16:25:43.783274 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:25:43.783279 | orchestrator | Saturday 31 May 2025 16:24:50 +0000 (0:00:00.340) 0:11:16.425 ********** 2025-05-31 16:25:43.783284 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783288 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783293 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783298 | orchestrator | 2025-05-31 16:25:43.783302 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:25:43.783307 | orchestrator | Saturday 31 May 2025 16:24:51 +0000 (0:00:00.328) 0:11:16.754 ********** 2025-05-31 16:25:43.783312 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:25:43.783317 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783322 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:25:43.783329 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783334 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:25:43.783339 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783344 | orchestrator | 2025-05-31 16:25:43.783349 | orchestrator | TASK [ceph-facts 
: set_fact rgw_instances_host] ******************************** 2025-05-31 16:25:43.783353 | orchestrator | Saturday 31 May 2025 16:24:51 +0000 (0:00:00.442) 0:11:17.196 ********** 2025-05-31 16:25:43.783358 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.783363 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783368 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.783373 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783377 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:25:43.783382 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783387 | orchestrator | 2025-05-31 16:25:43.783392 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:25:43.783396 | orchestrator | Saturday 31 May 2025 16:24:52 +0000 (0:00:00.595) 0:11:17.792 ********** 2025-05-31 16:25:43.783401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.783406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.783411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.783416 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:25:43.783420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:25:43.783425 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:25:43.783430 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783435 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:25:43.783444 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:25:43.783449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:25:43.783453 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783458 | orchestrator | 2025-05-31 16:25:43.783463 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-31 16:25:43.783468 | orchestrator | Saturday 31 May 2025 16:24:52 +0000 (0:00:00.615) 0:11:18.407 ********** 2025-05-31 16:25:43.783476 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783481 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783485 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783490 | orchestrator | 2025-05-31 16:25:43.783495 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-31 16:25:43.783500 | orchestrator | Saturday 31 May 2025 16:24:53 +0000 (0:00:00.810) 0:11:19.218 ********** 2025-05-31 16:25:43.783505 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.783509 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783514 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.783519 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783524 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.783528 | orchestrator | skipping: [testbed-node-5] 
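The "create rgw keyrings" step is skipped on this run, but when it does run it amounts to creating one cephx key per RGW instance. A minimal sketch, assuming a client.rgw.<hostname>.rgw0 naming that matches the rgw0 instances seen above (illustrative, not the ceph-rgw role):

---
# Illustrative sketch only.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: create rgw keyring (sketch)
      ansible.builtin.command: >
        ceph auth get-or-create client.rgw.{{ inventory_hostname }}.rgw0
        osd 'allow rwx' mon 'allow rw'
        -o /etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring
      args:
        creates: "/etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring"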
2025-05-31 16:25:43.783533 | orchestrator | 2025-05-31 16:25:43.783538 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-31 16:25:43.783543 | orchestrator | Saturday 31 May 2025 16:24:54 +0000 (0:00:00.558) 0:11:19.777 ********** 2025-05-31 16:25:43.783547 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783552 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783557 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783562 | orchestrator | 2025-05-31 16:25:43.783566 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-31 16:25:43.783571 | orchestrator | Saturday 31 May 2025 16:24:55 +0000 (0:00:00.779) 0:11:20.556 ********** 2025-05-31 16:25:43.783576 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783581 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783585 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783590 | orchestrator | 2025-05-31 16:25:43.783598 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-31 16:25:43.783603 | orchestrator | Saturday 31 May 2025 16:24:55 +0000 (0:00:00.527) 0:11:21.084 ********** 2025-05-31 16:25:43.783607 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.783612 | orchestrator | 2025-05-31 16:25:43.783617 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-31 16:25:43.783621 | orchestrator | Saturday 31 May 2025 16:24:56 +0000 (0:00:00.760) 0:11:21.844 ********** 2025-05-31 16:25:43.783626 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-31 16:25:43.783631 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-31 16:25:43.783636 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-31 16:25:43.783641 | orchestrator | 2025-05-31 16:25:43.783645 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-31 16:25:43.783650 | orchestrator | Saturday 31 May 2025 16:24:57 +0000 (0:00:00.693) 0:11:22.538 ********** 2025-05-31 16:25:43.783655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:25:43.783660 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.783664 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 16:25:43.783669 | orchestrator | 2025-05-31 16:25:43.783674 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-31 16:25:43.783679 | orchestrator | Saturday 31 May 2025 16:24:58 +0000 (0:00:01.873) 0:11:24.411 ********** 2025-05-31 16:25:43.783686 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 16:25:43.783691 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 16:25:43.783696 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.783700 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 16:25:43.783705 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 16:25:43.783710 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.783714 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 16:25:43.783719 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 16:25:43.783727 | 
orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.783732 | orchestrator | 2025-05-31 16:25:43.783737 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-31 16:25:43.783742 | orchestrator | Saturday 31 May 2025 16:25:00 +0000 (0:00:01.401) 0:11:25.813 ********** 2025-05-31 16:25:43.783746 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783751 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783756 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783760 | orchestrator | 2025-05-31 16:25:43.783765 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-31 16:25:43.783770 | orchestrator | Saturday 31 May 2025 16:25:00 +0000 (0:00:00.333) 0:11:26.147 ********** 2025-05-31 16:25:43.783775 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783779 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.783784 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.783789 | orchestrator | 2025-05-31 16:25:43.783794 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-31 16:25:43.783799 | orchestrator | Saturday 31 May 2025 16:25:00 +0000 (0:00:00.329) 0:11:26.476 ********** 2025-05-31 16:25:43.783803 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-31 16:25:43.783808 | orchestrator | 2025-05-31 16:25:43.783813 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-31 16:25:43.783818 | orchestrator | Saturday 31 May 2025 16:25:01 +0000 (0:00:00.237) 0:11:26.714 ********** 2025-05-31 16:25:43.783823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783847 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783852 | orchestrator | 2025-05-31 16:25:43.783857 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-31 16:25:43.783861 | orchestrator | Saturday 31 May 2025 16:25:02 +0000 (0:00:00.885) 0:11:27.599 ********** 2025-05-31 16:25:43.783866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783903 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783908 | orchestrator | 2025-05-31 16:25:43.783913 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-31 16:25:43.783917 | orchestrator | Saturday 31 May 2025 16:25:02 +0000 (0:00:00.853) 0:11:28.453 ********** 2025-05-31 16:25:43.783922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 16:25:43.783951 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.783955 | orchestrator | 2025-05-31 16:25:43.783960 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-31 16:25:43.783965 | orchestrator | Saturday 31 May 2025 16:25:03 +0000 (0:00:00.609) 0:11:29.063 ********** 2025-05-31 16:25:43.783972 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 16:25:43.783978 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 16:25:43.783983 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 16:25:43.783988 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 16:25:43.783993 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 16:25:43.783997 | orchestrator | 2025-05-31 16:25:43.784002 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-31 16:25:43.784007 | orchestrator | Saturday 31 May 2025 16:25:28 +0000 (0:00:24.798) 0:11:53.861 ********** 2025-05-31 16:25:43.784012 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.784017 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.784021 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.784026 | orchestrator | 2025-05-31 16:25:43.784031 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-31 16:25:43.784036 | orchestrator | Saturday 31 May 2025 
16:25:28 +0000 (0:00:00.373) 0:11:54.235 ********** 2025-05-31 16:25:43.784040 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.784045 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.784050 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.784054 | orchestrator | 2025-05-31 16:25:43.784059 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-31 16:25:43.784064 | orchestrator | Saturday 31 May 2025 16:25:29 +0000 (0:00:00.287) 0:11:54.522 ********** 2025-05-31 16:25:43.784069 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.784074 | orchestrator | 2025-05-31 16:25:43.784079 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-31 16:25:43.784083 | orchestrator | Saturday 31 May 2025 16:25:29 +0000 (0:00:00.461) 0:11:54.984 ********** 2025-05-31 16:25:43.784088 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.784093 | orchestrator | 2025-05-31 16:25:43.784098 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-31 16:25:43.784102 | orchestrator | Saturday 31 May 2025 16:25:30 +0000 (0:00:00.588) 0:11:55.572 ********** 2025-05-31 16:25:43.784107 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.784112 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.784117 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.784127 | orchestrator | 2025-05-31 16:25:43.784131 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-31 16:25:43.784136 | orchestrator | Saturday 31 May 2025 16:25:31 +0000 (0:00:01.239) 0:11:56.812 ********** 2025-05-31 16:25:43.784141 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.784146 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.784150 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.784155 | orchestrator | 2025-05-31 16:25:43.784160 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-31 16:25:43.784165 | orchestrator | Saturday 31 May 2025 16:25:32 +0000 (0:00:01.197) 0:11:58.010 ********** 2025-05-31 16:25:43.784170 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.784174 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.784179 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.784184 | orchestrator | 2025-05-31 16:25:43.784188 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-31 16:25:43.784193 | orchestrator | Saturday 31 May 2025 16:25:34 +0000 (0:00:01.872) 0:11:59.882 ********** 2025-05-31 16:25:43.784209 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.784215 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.784219 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-31 16:25:43.784224 | orchestrator | 2025-05-31 16:25:43.784229 | orchestrator | TASK [ceph-rgw : include_tasks 
multisite/main.yml] ***************************** 2025-05-31 16:25:43.784234 | orchestrator | Saturday 31 May 2025 16:25:36 +0000 (0:00:02.001) 0:12:01.884 ********** 2025-05-31 16:25:43.784238 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.784243 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:25:43.784248 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:25:43.784253 | orchestrator | 2025-05-31 16:25:43.784257 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-31 16:25:43.784262 | orchestrator | Saturday 31 May 2025 16:25:37 +0000 (0:00:01.135) 0:12:03.020 ********** 2025-05-31 16:25:43.784267 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.784272 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.784277 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.784281 | orchestrator | 2025-05-31 16:25:43.784286 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-31 16:25:43.784291 | orchestrator | Saturday 31 May 2025 16:25:38 +0000 (0:00:00.690) 0:12:03.710 ********** 2025-05-31 16:25:43.784298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:25:43.784303 | orchestrator | 2025-05-31 16:25:43.784308 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-31 16:25:43.784313 | orchestrator | Saturday 31 May 2025 16:25:38 +0000 (0:00:00.721) 0:12:04.432 ********** 2025-05-31 16:25:43.784317 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.784322 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.784327 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.784332 | orchestrator | 2025-05-31 16:25:43.784337 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-31 16:25:43.784341 | orchestrator | Saturday 31 May 2025 16:25:39 +0000 (0:00:00.318) 0:12:04.751 ********** 2025-05-31 16:25:43.784346 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.784351 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.784355 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.784360 | orchestrator | 2025-05-31 16:25:43.784365 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-31 16:25:43.784369 | orchestrator | Saturday 31 May 2025 16:25:40 +0000 (0:00:01.203) 0:12:05.954 ********** 2025-05-31 16:25:43.784378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:25:43.784383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:25:43.784387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:25:43.784392 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:25:43.784397 | orchestrator | 2025-05-31 16:25:43.784401 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-31 16:25:43.784406 | orchestrator | Saturday 31 May 2025 16:25:41 +0000 (0:00:00.932) 0:12:06.887 ********** 2025-05-31 16:25:43.784411 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:25:43.784416 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:25:43.784420 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:25:43.784425 | orchestrator | 2025-05-31 16:25:43.784430 | orchestrator | RUNNING HANDLER [ceph-handler : remove 
tempdir for scripts] ******************** 2025-05-31 16:25:43.784435 | orchestrator | Saturday 31 May 2025 16:25:41 +0000 (0:00:00.354) 0:12:07.242 ********** 2025-05-31 16:25:43.784440 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:25:43.784444 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:25:43.784449 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:25:43.784454 | orchestrator | 2025-05-31 16:25:43.784459 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:25:43.784464 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-31 16:25:43.784469 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-31 16:25:43.784474 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-31 16:25:43.784478 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-31 16:25:43.784483 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-31 16:25:43.784488 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-31 16:25:43.784493 | orchestrator | 2025-05-31 16:25:43.784498 | orchestrator | 2025-05-31 16:25:43.784502 | orchestrator | 2025-05-31 16:25:43.784507 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:25:43.784512 | orchestrator | Saturday 31 May 2025 16:25:43 +0000 (0:00:01.304) 0:12:08.547 ********** 2025-05-31 16:25:43.784516 | orchestrator | =============================================================================== 2025-05-31 16:25:43.784521 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 40.44s 2025-05-31 16:25:43.784528 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 38.94s 2025-05-31 16:25:43.784533 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.80s 2025-05-31 16:25:43.784538 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.49s 2025-05-31 16:25:43.784543 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.03s 2025-05-31 16:25:43.784547 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.33s 2025-05-31 16:25:43.784552 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.73s 2025-05-31 16:25:43.784557 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.69s 2025-05-31 16:25:43.784561 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.50s 2025-05-31 16:25:43.784566 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.84s 2025-05-31 16:25:43.784571 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.34s 2025-05-31 16:25:43.784579 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.19s 2025-05-31 16:25:43.784584 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.89s 2025-05-31 16:25:43.784588 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.33s 2025-05-31 16:25:43.784593 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.95s 2025-05-31 16:25:43.784601 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 3.94s 2025-05-31 16:25:43.784605 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.66s 2025-05-31 16:25:43.784610 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.55s 2025-05-31 16:25:43.784615 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 3.53s 2025-05-31 16:25:43.784620 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 3.31s 2025-05-31 16:25:43.784624 | orchestrator | 2025-05-31 16:25:43 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:43.784629 | orchestrator | 2025-05-31 16:25:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:46.788289 | orchestrator | 2025-05-31 16:25:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:46.788708 | orchestrator | 2025-05-31 16:25:46 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:25:46.790583 | orchestrator | 2025-05-31 16:25:46 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:46.790616 | orchestrator | 2025-05-31 16:25:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:49.841672 | orchestrator | 2025-05-31 16:25:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:49.841774 | orchestrator | 2025-05-31 16:25:49 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:25:49.843121 | orchestrator | 2025-05-31 16:25:49 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:49.843150 | orchestrator | 2025-05-31 16:25:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:52.884211 | orchestrator | 2025-05-31 16:25:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:52.887117 | orchestrator | 2025-05-31 16:25:52 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in 
state STARTED 2025-05-31 16:25:52.888504 | orchestrator | 2025-05-31 16:25:52 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state STARTED 2025-05-31 16:25:52.889039 | orchestrator | 2025-05-31 16:25:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:55.954121 | orchestrator | 2025-05-31 16:25:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:55.955677 | orchestrator | 2025-05-31 16:25:55 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:25:55.958941 | orchestrator | 2025-05-31 16:25:55 | INFO  | Task a42aafd6-fa29-4709-a2aa-bad794b54fdd is in state SUCCESS 2025-05-31 16:25:55.960859 | orchestrator | 2025-05-31 16:25:55.960926 | orchestrator | 2025-05-31 16:25:55.960939 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-31 16:25:55.960952 | orchestrator | 2025-05-31 16:25:55.960963 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-31 16:25:55.960990 | orchestrator | Saturday 31 May 2025 16:22:34 +0000 (0:00:00.157) 0:00:00.157 ********** 2025-05-31 16:25:55.961002 | orchestrator | ok: [localhost] => { 2025-05-31 16:25:55.961015 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-31 16:25:55.961055 | orchestrator | } 2025-05-31 16:25:55.961068 | orchestrator | 2025-05-31 16:25:55.961101 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-31 16:25:55.961113 | orchestrator | Saturday 31 May 2025 16:22:34 +0000 (0:00:00.044) 0:00:00.201 ********** 2025-05-31 16:25:55.961137 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-31 16:25:55.961149 | orchestrator | ...ignoring 2025-05-31 16:25:55.961160 | orchestrator | 2025-05-31 16:25:55.961171 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-31 16:25:55.961182 | orchestrator | Saturday 31 May 2025 16:22:37 +0000 (0:00:02.533) 0:00:02.735 ********** 2025-05-31 16:25:55.961192 | orchestrator | skipping: [localhost] 2025-05-31 16:25:55.961203 | orchestrator | 2025-05-31 16:25:55.961213 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-31 16:25:55.961224 | orchestrator | Saturday 31 May 2025 16:22:37 +0000 (0:00:00.088) 0:00:02.823 ********** 2025-05-31 16:25:55.961235 | orchestrator | ok: [localhost] 2025-05-31 16:25:55.961246 | orchestrator | 2025-05-31 16:25:55.961256 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:25:55.961267 | orchestrator | 2025-05-31 16:25:55.961277 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:25:55.961288 | orchestrator | Saturday 31 May 2025 16:22:37 +0000 (0:00:00.261) 0:00:03.084 ********** 2025-05-31 16:25:55.961305 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.961323 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.961341 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.961360 | orchestrator | 2025-05-31 16:25:55.961379 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:25:55.961394 | orchestrator | Saturday 31 May 2025 16:22:37 +0000 (0:00:00.381) 0:00:03.466 ********** 2025-05-31 16:25:55.961405 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-31 16:25:55.961416 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-31 16:25:55.961427 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-31 16:25:55.961437 | orchestrator | 2025-05-31 16:25:55.961448 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-31 16:25:55.961458 | orchestrator | 2025-05-31 16:25:55.961469 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-31 16:25:55.961482 | orchestrator | Saturday 31 May 2025 16:22:38 +0000 (0:00:00.395) 0:00:03.861 ********** 2025-05-31 16:25:55.961494 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:25:55.961506 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-31 16:25:55.961518 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-31 16:25:55.961530 | orchestrator | 2025-05-31 16:25:55.961542 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 16:25:55.961554 | orchestrator | Saturday 31 May 2025 16:22:38 +0000 (0:00:00.602) 0:00:04.464 ********** 2025-05-31 16:25:55.961566 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:55.961580 | orchestrator | 2025-05-31 16:25:55.961592 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-31 16:25:55.961603 | orchestrator | Saturday 31 May 2025 16:22:39 +0000 (0:00:00.649) 0:00:05.113 ********** 2025-05-31 
16:25:55.961638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.961726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.961749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.961770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.961802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.961815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.961826 | orchestrator | 2025-05-31 16:25:55.961837 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-31 16:25:55.961848 | orchestrator | Saturday 31 May 2025 16:22:43 +0000 (0:00:04.097) 0:00:09.211 ********** 2025-05-31 16:25:55.961859 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.961871 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.961908 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.961919 | orchestrator | 2025-05-31 16:25:55.961930 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-31 16:25:55.961941 | orchestrator | Saturday 31 May 2025 16:22:44 +0000 (0:00:00.711) 0:00:09.922 ********** 2025-05-31 16:25:55.961951 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.961962 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.961972 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.961983 | orchestrator | 2025-05-31 16:25:55.961994 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-31 16:25:55.962004 | orchestrator | Saturday 31 May 2025 16:22:45 +0000 (0:00:01.506) 0:00:11.428 ********** 2025-05-31 16:25:55.962075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.962107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.962120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.962163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.962181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.962193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.962205 | orchestrator | 2025-05-31 16:25:55.962215 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-31 16:25:55.962227 | orchestrator | Saturday 31 May 2025 16:22:51 +0000 (0:00:05.386) 0:00:16.814 ********** 2025-05-31 16:25:55.962237 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.962248 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.962258 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.962269 | orchestrator | 2025-05-31 16:25:55.962280 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-31 16:25:55.962290 | orchestrator | Saturday 31 May 2025 16:22:52 +0000 (0:00:01.053) 0:00:17.868 ********** 2025-05-31 16:25:55.962323 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:55.962335 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.962538 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:55.962558 | orchestrator | 2025-05-31 16:25:55.962569 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-31 16:25:55.962580 | orchestrator | Saturday 31 May 2025 16:22:59 +0000 (0:00:06.883) 0:00:24.751 
********** 2025-05-31 16:25:55.962603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.962624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 
2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.962647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 16:25:55.962667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.962684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.962696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-31 16:25:55.962707 | orchestrator | 2025-05-31 16:25:55.962718 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-31 16:25:55.962743 | orchestrator | Saturday 31 May 2025 16:23:04 +0000 (0:00:04.970) 0:00:29.721 ********** 2025-05-31 16:25:55.962754 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:55.962765 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.962775 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:55.962786 | orchestrator | 2025-05-31 16:25:55.962797 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-31 16:25:55.962808 | orchestrator | Saturday 31 May 2025 16:23:05 +0000 (0:00:00.970) 0:00:30.692 ********** 2025-05-31 16:25:55.962819 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.962829 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.962840 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.962851 | orchestrator | 2025-05-31 16:25:55.962862 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-31 16:25:55.962910 | orchestrator | Saturday 31 May 2025 16:23:05 +0000 (0:00:00.376) 0:00:31.069 ********** 2025-05-31 16:25:55.962925 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.962936 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.962947 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.962957 | orchestrator | 2025-05-31 16:25:55.962968 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-31 16:25:55.962979 | orchestrator | Saturday 31 May 2025 16:23:05 +0000 (0:00:00.259) 0:00:31.328 ********** 2025-05-31 16:25:55.962991 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-31 16:25:55.963003 | orchestrator | ...ignoring 2025-05-31 16:25:55.963013 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-31 16:25:55.963024 | orchestrator | ...ignoring 2025-05-31 16:25:55.963035 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-31 16:25:55.963046 | orchestrator | ...ignoring 2025-05-31 16:25:55.963056 | orchestrator | 2025-05-31 16:25:55.963067 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-31 16:25:55.963078 | orchestrator | Saturday 31 May 2025 16:23:16 +0000 (0:00:10.941) 0:00:42.270 ********** 2025-05-31 16:25:55.963088 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.963099 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.963109 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.963120 | orchestrator | 2025-05-31 16:25:55.963130 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-31 16:25:55.963141 | orchestrator | Saturday 31 May 2025 16:23:17 +0000 (0:00:00.562) 0:00:42.833 ********** 2025-05-31 16:25:55.963151 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.963162 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.963174 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.963185 | orchestrator | 2025-05-31 16:25:55.963197 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-31 16:25:55.963210 | orchestrator | Saturday 31 May 2025 16:23:17 +0000 (0:00:00.579) 0:00:43.412 ********** 2025-05-31 16:25:55.963222 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.963234 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.963246 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.963258 | orchestrator | 2025-05-31 16:25:55.963276 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-31 16:25:55.963289 | orchestrator | Saturday 31 May 2025 16:23:18 +0000 (0:00:00.423) 0:00:43.836 ********** 2025-05-31 16:25:55.963301 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.963313 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.963325 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.963337 | orchestrator | 2025-05-31 16:25:55.963349 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-31 16:25:55.963369 | orchestrator | Saturday 31 May 2025 16:23:18 +0000 (0:00:00.571) 0:00:44.407 ********** 2025-05-31 16:25:55.963381 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.963393 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.963411 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.963430 | orchestrator | 2025-05-31 16:25:55.963450 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-31 16:25:55.963471 | orchestrator | Saturday 31 May 2025 16:23:19 +0000 (0:00:00.536) 0:00:44.944 ********** 2025-05-31 16:25:55.963498 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.963515 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.963528 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.963538 | orchestrator | 2025-05-31 16:25:55.963549 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 16:25:55.963560 | orchestrator | Saturday 31 May 2025 16:23:19 +0000 (0:00:00.492) 0:00:45.436 ********** 2025-05-31 16:25:55.963570 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.963581 | orchestrator | skipping: 
[testbed-node-2] 2025-05-31 16:25:55.963591 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-31 16:25:55.963602 | orchestrator | 2025-05-31 16:25:55.963612 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-31 16:25:55.963623 | orchestrator | Saturday 31 May 2025 16:23:20 +0000 (0:00:00.485) 0:00:45.922 ********** 2025-05-31 16:25:55.963633 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.963644 | orchestrator | 2025-05-31 16:25:55.963654 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-31 16:25:55.963665 | orchestrator | Saturday 31 May 2025 16:23:30 +0000 (0:00:10.389) 0:00:56.311 ********** 2025-05-31 16:25:55.963675 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.963686 | orchestrator | 2025-05-31 16:25:55.963696 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 16:25:55.963707 | orchestrator | Saturday 31 May 2025 16:23:30 +0000 (0:00:00.142) 0:00:56.453 ********** 2025-05-31 16:25:55.963717 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.963728 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.963738 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.963749 | orchestrator | 2025-05-31 16:25:55.963759 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-31 16:25:55.963770 | orchestrator | Saturday 31 May 2025 16:23:32 +0000 (0:00:01.159) 0:00:57.613 ********** 2025-05-31 16:25:55.963780 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.963791 | orchestrator | 2025-05-31 16:25:55.963801 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-31 16:25:55.963812 | orchestrator | Saturday 31 May 2025 16:23:39 +0000 (0:00:07.673) 0:01:05.286 ********** 2025-05-31 16:25:55.963823 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.963833 | orchestrator | 2025-05-31 16:25:55.963844 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-31 16:25:55.963854 | orchestrator | Saturday 31 May 2025 16:23:41 +0000 (0:00:01.656) 0:01:06.943 ********** 2025-05-31 16:25:55.963865 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.963942 | orchestrator | 2025-05-31 16:25:55.963955 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-31 16:25:55.963966 | orchestrator | Saturday 31 May 2025 16:23:43 +0000 (0:00:02.439) 0:01:09.382 ********** 2025-05-31 16:25:55.963977 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.963987 | orchestrator | 2025-05-31 16:25:55.963998 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-31 16:25:55.964009 | orchestrator | Saturday 31 May 2025 16:23:43 +0000 (0:00:00.114) 0:01:09.497 ********** 2025-05-31 16:25:55.964019 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.964030 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.964040 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.964051 | orchestrator | 2025-05-31 16:25:55.964070 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-31 16:25:55.964080 | orchestrator | Saturday 31 May 2025 16:23:44 +0000 (0:00:00.462) 0:01:09.960 ********** 
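(Annotation, not part of the console output.) The sequence above is the standard Galera bootstrap pattern: one node initialises a new cluster ("Running MariaDB bootstrap container" / "Starting first MariaDB container"), the remaining nodes are then started and each one is probed for port liveness and WSREP sync before the rolling restarts continue. A minimal illustrative sketch of those two probes, assuming the container name "mariadb" and the monitor credentials shown in the task items; this is not the kolla-ansible role source:

    # Sketch only -- not the actual kolla-ansible tasks.
    - name: Wait for MariaDB service port liveness
      ansible.builtin.wait_for:
        host: "{{ ansible_host }}"     # e.g. 192.168.16.10 on testbed-node-0
        port: 3306
        search_regex: MariaDB          # yields the "Timeout when waiting for search string MariaDB" message seen above
        timeout: 10
      ignore_errors: true              # a timeout before the first deployment is expected, as the play notes

    - name: Wait for MariaDB service to sync WSREP (assumed command)
      ansible.builtin.command: >
        docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_status
      until: "'Synced' in wsrep_status.stdout"
      retries: 10
      delay: 6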
2025-05-31 16:25:55.964091 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.964101 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:55.964112 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:55.964122 | orchestrator | 2025-05-31 16:25:55.964133 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-31 16:25:55.964143 | orchestrator | Saturday 31 May 2025 16:23:44 +0000 (0:00:00.409) 0:01:10.369 ********** 2025-05-31 16:25:55.964154 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-31 16:25:55.964164 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:55.964175 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.964185 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:55.964196 | orchestrator | 2025-05-31 16:25:55.964206 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-31 16:25:55.964217 | orchestrator | skipping: no hosts matched 2025-05-31 16:25:55.964227 | orchestrator | 2025-05-31 16:25:55.964237 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-31 16:25:55.964248 | orchestrator | 2025-05-31 16:25:55.964259 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-31 16:25:55.964269 | orchestrator | Saturday 31 May 2025 16:24:02 +0000 (0:00:17.228) 0:01:27.598 ********** 2025-05-31 16:25:55.964280 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:25:55.964290 | orchestrator | 2025-05-31 16:25:55.964301 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-31 16:25:55.964311 | orchestrator | Saturday 31 May 2025 16:24:18 +0000 (0:00:16.044) 0:01:43.643 ********** 2025-05-31 16:25:55.964329 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.964340 | orchestrator | 2025-05-31 16:25:55.964351 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-31 16:25:55.964362 | orchestrator | Saturday 31 May 2025 16:24:38 +0000 (0:00:20.569) 0:02:04.212 ********** 2025-05-31 16:25:55.964372 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.964383 | orchestrator | 2025-05-31 16:25:55.964393 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-31 16:25:55.964404 | orchestrator | 2025-05-31 16:25:55.964414 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-31 16:25:55.964424 | orchestrator | Saturday 31 May 2025 16:24:41 +0000 (0:00:02.486) 0:02:06.698 ********** 2025-05-31 16:25:55.964435 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:25:55.964452 | orchestrator | 2025-05-31 16:25:55.964469 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-31 16:25:55.964486 | orchestrator | Saturday 31 May 2025 16:24:56 +0000 (0:00:15.358) 0:02:22.057 ********** 2025-05-31 16:25:55.964504 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.964521 | orchestrator | 2025-05-31 16:25:55.964543 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-31 16:25:55.964553 | orchestrator | Saturday 31 May 2025 16:25:17 +0000 (0:00:20.554) 0:02:42.611 ********** 2025-05-31 16:25:55.964563 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.964572 | 
orchestrator | 2025-05-31 16:25:55.964582 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-31 16:25:55.964591 | orchestrator | 2025-05-31 16:25:55.964600 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-31 16:25:55.964610 | orchestrator | Saturday 31 May 2025 16:25:19 +0000 (0:00:02.418) 0:02:45.029 ********** 2025-05-31 16:25:55.964619 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.964628 | orchestrator | 2025-05-31 16:25:55.964637 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-31 16:25:55.964647 | orchestrator | Saturday 31 May 2025 16:25:35 +0000 (0:00:16.368) 0:03:01.398 ********** 2025-05-31 16:25:55.964656 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.964672 | orchestrator | 2025-05-31 16:25:55.964681 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-31 16:25:55.964691 | orchestrator | Saturday 31 May 2025 16:25:36 +0000 (0:00:00.547) 0:03:01.946 ********** 2025-05-31 16:25:55.964700 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.964709 | orchestrator | 2025-05-31 16:25:55.964718 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-31 16:25:55.964728 | orchestrator | 2025-05-31 16:25:55.964737 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-31 16:25:55.964746 | orchestrator | Saturday 31 May 2025 16:25:38 +0000 (0:00:02.488) 0:03:04.434 ********** 2025-05-31 16:25:55.964756 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:25:55.964765 | orchestrator | 2025-05-31 16:25:55.964774 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-31 16:25:55.964784 | orchestrator | Saturday 31 May 2025 16:25:39 +0000 (0:00:00.708) 0:03:05.143 ********** 2025-05-31 16:25:55.964793 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.964802 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.964812 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.964821 | orchestrator | 2025-05-31 16:25:55.964831 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-31 16:25:55.964840 | orchestrator | Saturday 31 May 2025 16:25:42 +0000 (0:00:02.961) 0:03:08.104 ********** 2025-05-31 16:25:55.964849 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.964859 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.964868 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.964900 | orchestrator | 2025-05-31 16:25:55.964910 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-31 16:25:55.964920 | orchestrator | Saturday 31 May 2025 16:25:44 +0000 (0:00:02.228) 0:03:10.333 ********** 2025-05-31 16:25:55.964929 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.964938 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.964948 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.964957 | orchestrator | 2025-05-31 16:25:55.964967 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-31 16:25:55.964976 | orchestrator | Saturday 31 May 2025 16:25:47 +0000 (0:00:02.477) 0:03:12.811 ********** 
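The post-deploy tasks above create the shard root, monitor, and backup accounts only once, against the bootstrap host; testbed-node-1 and testbed-node-2 skip them and Galera replicates the grants, which is why the results that follow report changed only on testbed-node-0. A rough sketch of such a task is shown below; the module, privilege list, and variable names are illustrative assumptions, not the actual role code.

    # Illustrative sketch only: creates a backup account once on the
    # first MariaDB host; privileges and variable names are assumptions.
    - name: Creating database backup user and setting permissions
      community.mysql.mysql_user:
        name: backup
        host: "%"
        password: "{{ mariadb_backup_database_password }}"
        priv: "*.*:RELOAD,PROCESS,LOCK TABLES,REPLICATION CLIENT"
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"
      when: inventory_hostname == groups['mariadb'][0]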
2025-05-31 16:25:55.964985 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.964995 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.965004 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:25:55.965013 | orchestrator | 2025-05-31 16:25:55.965023 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-31 16:25:55.965032 | orchestrator | Saturday 31 May 2025 16:25:49 +0000 (0:00:02.272) 0:03:15.083 ********** 2025-05-31 16:25:55.965042 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:25:55.965051 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:25:55.965060 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:25:55.965070 | orchestrator | 2025-05-31 16:25:55.965079 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-31 16:25:55.965089 | orchestrator | Saturday 31 May 2025 16:25:52 +0000 (0:00:03.244) 0:03:18.328 ********** 2025-05-31 16:25:55.965098 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:25:55.965107 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:25:55.965117 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:25:55.965126 | orchestrator | 2025-05-31 16:25:55.965136 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:25:55.965145 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-31 16:25:55.965155 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-31 16:25:55.965172 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-31 16:25:55.965188 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-31 16:25:55.965198 | orchestrator | 2025-05-31 16:25:55.965208 | orchestrator | 2025-05-31 16:25:55.965217 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:25:55.965227 | orchestrator | Saturday 31 May 2025 16:25:53 +0000 (0:00:00.353) 0:03:18.682 ********** 2025-05-31 16:25:55.965236 | orchestrator | =============================================================================== 2025-05-31 16:25:55.965246 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.12s 2025-05-31 16:25:55.965255 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 31.40s 2025-05-31 16:25:55.965264 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 17.23s 2025-05-31 16:25:55.965278 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.37s 2025-05-31 16:25:55.965288 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.94s 2025-05-31 16:25:55.965298 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.39s 2025-05-31 16:25:55.965307 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.67s 2025-05-31 16:25:55.965316 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 6.88s 2025-05-31 16:25:55.965326 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.39s 2025-05-31 16:25:55.965335 | orchestrator | mariadb : Check mariadb containers 
-------------------------------------- 4.97s 2025-05-31 16:25:55.965345 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.91s 2025-05-31 16:25:55.965354 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.10s 2025-05-31 16:25:55.965364 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.24s 2025-05-31 16:25:55.965373 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.96s 2025-05-31 16:25:55.965383 | orchestrator | Check MariaDB service --------------------------------------------------- 2.53s 2025-05-31 16:25:55.965392 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.49s 2025-05-31 16:25:55.965402 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.48s 2025-05-31 16:25:55.965411 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.44s 2025-05-31 16:25:55.965420 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.27s 2025-05-31 16:25:55.965430 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.23s 2025-05-31 16:25:55.965439 | orchestrator | 2025-05-31 16:25:55 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:25:55.965449 | orchestrator | 2025-05-31 16:25:55 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:25:55.965459 | orchestrator | 2025-05-31 16:25:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:25:58.997223 | orchestrator | 2025-05-31 16:25:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:25:58.998571 | orchestrator | 2025-05-31 16:25:58 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:25:58.999723 | orchestrator | 2025-05-31 16:25:58 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:25:59.000357 | orchestrator | 2025-05-31 16:25:58 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:25:59.000402 | orchestrator | 2025-05-31 16:25:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:02.049116 | orchestrator | 2025-05-31 16:26:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:02.050306 | orchestrator | 2025-05-31 16:26:02 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:02.053208 | orchestrator | 2025-05-31 16:26:02 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:02.053709 | orchestrator | 2025-05-31 16:26:02 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:02.053738 | orchestrator | 2025-05-31 16:26:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:05.090996 | orchestrator | 2025-05-31 16:26:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:05.092139 | orchestrator | 2025-05-31 16:26:05 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:05.093351 | orchestrator | 2025-05-31 16:26:05 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:05.095036 | orchestrator | 2025-05-31 16:26:05 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:05.095083 | 
orchestrator | 2025-05-31 16:26:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:08.127543 | orchestrator | 2025-05-31 16:26:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:08.127629 | orchestrator | 2025-05-31 16:26:08 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:08.127829 | orchestrator | 2025-05-31 16:26:08 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:08.128646 | orchestrator | 2025-05-31 16:26:08 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:08.128673 | orchestrator | 2025-05-31 16:26:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:11.169667 | orchestrator | 2025-05-31 16:26:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:11.169755 | orchestrator | 2025-05-31 16:26:11 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:11.169768 | orchestrator | 2025-05-31 16:26:11 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:11.169779 | orchestrator | 2025-05-31 16:26:11 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:11.169790 | orchestrator | 2025-05-31 16:26:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:14.216536 | orchestrator | 2025-05-31 16:26:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:14.218332 | orchestrator | 2025-05-31 16:26:14 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:14.222138 | orchestrator | 2025-05-31 16:26:14 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:14.222168 | orchestrator | 2025-05-31 16:26:14 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:14.222179 | orchestrator | 2025-05-31 16:26:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:17.263293 | orchestrator | 2025-05-31 16:26:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:17.263598 | orchestrator | 2025-05-31 16:26:17 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:17.264283 | orchestrator | 2025-05-31 16:26:17 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:17.265824 | orchestrator | 2025-05-31 16:26:17 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:17.265888 | orchestrator | 2025-05-31 16:26:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:20.296013 | orchestrator | 2025-05-31 16:26:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:20.297850 | orchestrator | 2025-05-31 16:26:20 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:20.298471 | orchestrator | 2025-05-31 16:26:20 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:20.299962 | orchestrator | 2025-05-31 16:26:20 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:20.299990 | orchestrator | 2025-05-31 16:26:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:23.333186 | orchestrator | 2025-05-31 16:26:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:23.334401 | orchestrator | 2025-05-31 
16:26:23 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:23.335801 | orchestrator | 2025-05-31 16:26:23 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:23.336917 | orchestrator | 2025-05-31 16:26:23 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:23.336940 | orchestrator | 2025-05-31 16:26:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:26.370066 | orchestrator | 2025-05-31 16:26:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:26.370555 | orchestrator | 2025-05-31 16:26:26 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:26.372680 | orchestrator | 2025-05-31 16:26:26 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:26.374358 | orchestrator | 2025-05-31 16:26:26 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:26.375128 | orchestrator | 2025-05-31 16:26:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:29.412493 | orchestrator | 2025-05-31 16:26:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:29.412831 | orchestrator | 2025-05-31 16:26:29 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:29.414329 | orchestrator | 2025-05-31 16:26:29 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:29.415027 | orchestrator | 2025-05-31 16:26:29 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:29.415124 | orchestrator | 2025-05-31 16:26:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:32.467029 | orchestrator | 2025-05-31 16:26:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:32.468498 | orchestrator | 2025-05-31 16:26:32 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:32.470158 | orchestrator | 2025-05-31 16:26:32 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:32.471582 | orchestrator | 2025-05-31 16:26:32 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:32.471826 | orchestrator | 2025-05-31 16:26:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:35.522272 | orchestrator | 2025-05-31 16:26:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:35.523412 | orchestrator | 2025-05-31 16:26:35 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:35.524333 | orchestrator | 2025-05-31 16:26:35 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:35.525589 | orchestrator | 2025-05-31 16:26:35 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:35.525611 | orchestrator | 2025-05-31 16:26:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:38.569928 | orchestrator | 2025-05-31 16:26:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:38.570576 | orchestrator | 2025-05-31 16:26:38 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:38.572715 | orchestrator | 2025-05-31 16:26:38 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:38.575445 | orchestrator | 2025-05-31 
16:26:38 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:38.575490 | orchestrator | 2025-05-31 16:26:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:41.613011 | orchestrator | 2025-05-31 16:26:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:41.614086 | orchestrator | 2025-05-31 16:26:41 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:41.615730 | orchestrator | 2025-05-31 16:26:41 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:41.617312 | orchestrator | 2025-05-31 16:26:41 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:41.617349 | orchestrator | 2025-05-31 16:26:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:44.657137 | orchestrator | 2025-05-31 16:26:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:44.658259 | orchestrator | 2025-05-31 16:26:44 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:44.659829 | orchestrator | 2025-05-31 16:26:44 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:44.661452 | orchestrator | 2025-05-31 16:26:44 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:44.661488 | orchestrator | 2025-05-31 16:26:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:47.715376 | orchestrator | 2025-05-31 16:26:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:47.716033 | orchestrator | 2025-05-31 16:26:47 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:47.717162 | orchestrator | 2025-05-31 16:26:47 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:47.718165 | orchestrator | 2025-05-31 16:26:47 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:47.718190 | orchestrator | 2025-05-31 16:26:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:50.764428 | orchestrator | 2025-05-31 16:26:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:50.765724 | orchestrator | 2025-05-31 16:26:50 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:50.767101 | orchestrator | 2025-05-31 16:26:50 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:50.771017 | orchestrator | 2025-05-31 16:26:50 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:50.771110 | orchestrator | 2025-05-31 16:26:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:53.806732 | orchestrator | 2025-05-31 16:26:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:53.806841 | orchestrator | 2025-05-31 16:26:53 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:53.807556 | orchestrator | 2025-05-31 16:26:53 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:53.808735 | orchestrator | 2025-05-31 16:26:53 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:53.808751 | orchestrator | 2025-05-31 16:26:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:56.858952 | orchestrator | 2025-05-31 16:26:56 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:56.860907 | orchestrator | 2025-05-31 16:26:56 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:56.863566 | orchestrator | 2025-05-31 16:26:56 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:56.865771 | orchestrator | 2025-05-31 16:26:56 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:56.865814 | orchestrator | 2025-05-31 16:26:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:26:59.908415 | orchestrator | 2025-05-31 16:26:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:26:59.909664 | orchestrator | 2025-05-31 16:26:59 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:26:59.911394 | orchestrator | 2025-05-31 16:26:59 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:26:59.913021 | orchestrator | 2025-05-31 16:26:59 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:26:59.913047 | orchestrator | 2025-05-31 16:26:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:02.960290 | orchestrator | 2025-05-31 16:27:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:02.961620 | orchestrator | 2025-05-31 16:27:02 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:02.963176 | orchestrator | 2025-05-31 16:27:02 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:02.964501 | orchestrator | 2025-05-31 16:27:02 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:02.964526 | orchestrator | 2025-05-31 16:27:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:06.016668 | orchestrator | 2025-05-31 16:27:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:06.019495 | orchestrator | 2025-05-31 16:27:06 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:06.019768 | orchestrator | 2025-05-31 16:27:06 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:06.020998 | orchestrator | 2025-05-31 16:27:06 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:06.021022 | orchestrator | 2025-05-31 16:27:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:09.065152 | orchestrator | 2025-05-31 16:27:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:09.066225 | orchestrator | 2025-05-31 16:27:09 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:09.067693 | orchestrator | 2025-05-31 16:27:09 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:09.069447 | orchestrator | 2025-05-31 16:27:09 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:09.069564 | orchestrator | 2025-05-31 16:27:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:12.109404 | orchestrator | 2025-05-31 16:27:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:12.111084 | orchestrator | 2025-05-31 16:27:12 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:12.112603 | orchestrator | 2025-05-31 16:27:12 | INFO  | Task 
4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:12.114303 | orchestrator | 2025-05-31 16:27:12 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:12.114332 | orchestrator | 2025-05-31 16:27:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:15.159282 | orchestrator | 2025-05-31 16:27:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:15.160622 | orchestrator | 2025-05-31 16:27:15 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:15.163134 | orchestrator | 2025-05-31 16:27:15 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:15.165200 | orchestrator | 2025-05-31 16:27:15 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:15.165307 | orchestrator | 2025-05-31 16:27:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:18.226606 | orchestrator | 2025-05-31 16:27:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:18.228791 | orchestrator | 2025-05-31 16:27:18 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:18.230241 | orchestrator | 2025-05-31 16:27:18 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:18.231945 | orchestrator | 2025-05-31 16:27:18 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:18.232236 | orchestrator | 2025-05-31 16:27:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:21.280671 | orchestrator | 2025-05-31 16:27:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:21.282607 | orchestrator | 2025-05-31 16:27:21 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:21.284753 | orchestrator | 2025-05-31 16:27:21 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:21.286539 | orchestrator | 2025-05-31 16:27:21 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:21.286566 | orchestrator | 2025-05-31 16:27:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:24.342436 | orchestrator | 2025-05-31 16:27:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:24.343911 | orchestrator | 2025-05-31 16:27:24 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:24.345993 | orchestrator | 2025-05-31 16:27:24 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:24.347462 | orchestrator | 2025-05-31 16:27:24 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:24.347487 | orchestrator | 2025-05-31 16:27:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:27.402126 | orchestrator | 2025-05-31 16:27:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:27.403708 | orchestrator | 2025-05-31 16:27:27 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:27.405426 | orchestrator | 2025-05-31 16:27:27 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:27.407766 | orchestrator | 2025-05-31 16:27:27 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:27.407794 | orchestrator | 2025-05-31 16:27:27 | INFO  | Wait 1 
second(s) until the next check 2025-05-31 16:27:30.458156 | orchestrator | 2025-05-31 16:27:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:30.459954 | orchestrator | 2025-05-31 16:27:30 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:30.461636 | orchestrator | 2025-05-31 16:27:30 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:30.462758 | orchestrator | 2025-05-31 16:27:30 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state STARTED 2025-05-31 16:27:30.462784 | orchestrator | 2025-05-31 16:27:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:33.517766 | orchestrator | 2025-05-31 16:27:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:33.519398 | orchestrator | 2025-05-31 16:27:33 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:33.521518 | orchestrator | 2025-05-31 16:27:33 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:33.523451 | orchestrator | 2025-05-31 16:27:33 | INFO  | Task 07a501bb-4e4a-4863-9c20-a5218cb71e31 is in state SUCCESS 2025-05-31 16:27:33.525377 | orchestrator | 2025-05-31 16:27:33.525415 | orchestrator | 2025-05-31 16:27:33.525427 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:27:33.525438 | orchestrator | 2025-05-31 16:27:33.525449 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:27:33.525461 | orchestrator | Saturday 31 May 2025 16:25:56 +0000 (0:00:00.291) 0:00:00.291 ********** 2025-05-31 16:27:33.525638 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.525655 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.525667 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.525678 | orchestrator | 2025-05-31 16:27:33.525689 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:27:33.525715 | orchestrator | Saturday 31 May 2025 16:25:56 +0000 (0:00:00.386) 0:00:00.677 ********** 2025-05-31 16:27:33.525726 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-31 16:27:33.525737 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-31 16:27:33.525748 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-31 16:27:33.525758 | orchestrator | 2025-05-31 16:27:33.525769 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-31 16:27:33.525780 | orchestrator | 2025-05-31 16:27:33.525790 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 16:27:33.525801 | orchestrator | Saturday 31 May 2025 16:25:57 +0000 (0:00:00.283) 0:00:00.961 ********** 2025-05-31 16:27:33.525812 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:27:33.525822 | orchestrator | 2025-05-31 16:27:33.525833 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-31 16:27:33.525875 | orchestrator | Saturday 31 May 2025 16:25:57 +0000 (0:00:00.737) 0:00:01.698 ********** 2025-05-31 16:27:33.525892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.525957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.525972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.525993 | orchestrator | 2025-05-31 16:27:33.526005 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-31 16:27:33.526063 | orchestrator | Saturday 31 May 2025 16:25:59 +0000 (0:00:01.736) 0:00:03.434 ********** 2025-05-31 16:27:33.526077 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.526088 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.526099 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.526109 | orchestrator | 2025-05-31 16:27:33.526120 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-05-31 16:27:33.526131 | orchestrator | Saturday 31 May 2025 16:25:59 +0000 (0:00:00.265) 0:00:03.700 ********** 2025-05-31 16:27:33.526151 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-31 16:27:33.526162 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-31 16:27:33.526173 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-31 16:27:33.526184 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-31 16:27:33.526194 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-31 16:27:33.526210 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-31 16:27:33.526221 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-31 16:27:33.526232 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-31 16:27:33.526242 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-31 16:27:33.526253 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-31 16:27:33.526264 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-31 16:27:33.526276 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-31 16:27:33.526328 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-31 16:27:33.526341 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-31 16:27:33.526353 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-31 16:27:33.526365 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-31 16:27:33.526377 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-31 16:27:33.526388 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-31 16:27:33.526400 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-31 16:27:33.526412 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-31 16:27:33.526424 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-31 16:27:33.526438 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-31 16:27:33.526452 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-31 16:27:33.526463 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-31 16:27:33.526473 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-31 16:27:33.526484 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-31 16:27:33.526495 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-31 16:27:33.526506 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-31 16:27:33.526516 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-31 16:27:33.526527 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-31 16:27:33.526537 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-31 16:27:33.526548 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-31 16:27:33.526559 | orchestrator | 2025-05-31 16:27:33.526569 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.526580 | orchestrator | Saturday 31 May 2025 16:26:00 +0000 (0:00:00.983) 0:00:04.684 ********** 2025-05-31 16:27:33.526591 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.526602 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.526612 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.526623 | orchestrator | 2025-05-31 16:27:33.526633 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.526644 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.460) 0:00:05.144 ********** 2025-05-31 16:27:33.526655 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.526667 | orchestrator | 2025-05-31 16:27:33.526683 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.526701 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.133) 0:00:05.278 ********** 2025-05-31 16:27:33.526712 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.526723 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.526734 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.526744 | orchestrator | 2025-05-31 16:27:33.526755 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.526765 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.257) 0:00:05.535 ********** 2025-05-31 16:27:33.526776 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.526787 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.526802 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.526813 | orchestrator | 2025-05-31 16:27:33.526823 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.526834 | orchestrator | Saturday 31 May 2025 16:26:02 +0000 (0:00:00.484) 0:00:06.019 ********** 2025-05-31 16:27:33.526883 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.526894 | orchestrator | 2025-05-31 
16:27:33.526905 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.526915 | orchestrator | Saturday 31 May 2025 16:26:02 +0000 (0:00:00.106) 0:00:06.126 ********** 2025-05-31 16:27:33.526925 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.526936 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.526947 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.526957 | orchestrator | 2025-05-31 16:27:33.526967 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.526978 | orchestrator | Saturday 31 May 2025 16:26:02 +0000 (0:00:00.487) 0:00:06.613 ********** 2025-05-31 16:27:33.526998 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.527011 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.527021 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.527031 | orchestrator | 2025-05-31 16:27:33.527042 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.527056 | orchestrator | Saturday 31 May 2025 16:26:03 +0000 (0:00:00.473) 0:00:07.086 ********** 2025-05-31 16:27:33.527072 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527083 | orchestrator | 2025-05-31 16:27:33.527094 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.527105 | orchestrator | Saturday 31 May 2025 16:26:03 +0000 (0:00:00.115) 0:00:07.202 ********** 2025-05-31 16:27:33.527115 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527126 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.527137 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.527147 | orchestrator | 2025-05-31 16:27:33.527157 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.527168 | orchestrator | Saturday 31 May 2025 16:26:03 +0000 (0:00:00.393) 0:00:07.595 ********** 2025-05-31 16:27:33.527178 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.527189 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.527200 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.527288 | orchestrator | 2025-05-31 16:27:33.527301 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.527312 | orchestrator | Saturday 31 May 2025 16:26:04 +0000 (0:00:00.458) 0:00:08.054 ********** 2025-05-31 16:27:33.527323 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527333 | orchestrator | 2025-05-31 16:27:33.527344 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.527355 | orchestrator | Saturday 31 May 2025 16:26:04 +0000 (0:00:00.123) 0:00:08.178 ********** 2025-05-31 16:27:33.527365 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527376 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.527386 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.527397 | orchestrator | 2025-05-31 16:27:33.527407 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.527418 | orchestrator | Saturday 31 May 2025 16:26:04 +0000 (0:00:00.415) 0:00:08.593 ********** 2025-05-31 16:27:33.527437 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.527447 | orchestrator | ok: [testbed-node-1] 2025-05-31 
16:27:33.527458 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.527468 | orchestrator | 2025-05-31 16:27:33.527479 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.527489 | orchestrator | Saturday 31 May 2025 16:26:04 +0000 (0:00:00.324) 0:00:08.918 ********** 2025-05-31 16:27:33.527500 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527510 | orchestrator | 2025-05-31 16:27:33.527521 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.527531 | orchestrator | Saturday 31 May 2025 16:26:05 +0000 (0:00:00.240) 0:00:09.158 ********** 2025-05-31 16:27:33.527542 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527552 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.527563 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.527573 | orchestrator | 2025-05-31 16:27:33.527584 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.527594 | orchestrator | Saturday 31 May 2025 16:26:05 +0000 (0:00:00.284) 0:00:09.443 ********** 2025-05-31 16:27:33.527605 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.527615 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.527626 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.527636 | orchestrator | 2025-05-31 16:27:33.527647 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.527657 | orchestrator | Saturday 31 May 2025 16:26:05 +0000 (0:00:00.432) 0:00:09.875 ********** 2025-05-31 16:27:33.527668 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527678 | orchestrator | 2025-05-31 16:27:33.527689 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.527699 | orchestrator | Saturday 31 May 2025 16:26:06 +0000 (0:00:00.124) 0:00:09.999 ********** 2025-05-31 16:27:33.527710 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527720 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.527731 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.527741 | orchestrator | 2025-05-31 16:27:33.527752 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.527763 | orchestrator | Saturday 31 May 2025 16:26:06 +0000 (0:00:00.507) 0:00:10.507 ********** 2025-05-31 16:27:33.527780 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.527792 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.527802 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.527813 | orchestrator | 2025-05-31 16:27:33.527823 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.527834 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:00.522) 0:00:11.029 ********** 2025-05-31 16:27:33.527932 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527944 | orchestrator | 2025-05-31 16:27:33.527955 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.527965 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:00.115) 0:00:11.145 ********** 2025-05-31 16:27:33.527982 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.527993 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.528004 | orchestrator 
| skipping: [testbed-node-2] 2025-05-31 16:27:33.528014 | orchestrator | 2025-05-31 16:27:33.528025 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.528035 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:00.412) 0:00:11.558 ********** 2025-05-31 16:27:33.528046 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.528056 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.528067 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.528077 | orchestrator | 2025-05-31 16:27:33.528088 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.528098 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:00.333) 0:00:11.892 ********** 2025-05-31 16:27:33.528116 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528127 | orchestrator | 2025-05-31 16:27:33.528137 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.528148 | orchestrator | Saturday 31 May 2025 16:26:08 +0000 (0:00:00.339) 0:00:12.231 ********** 2025-05-31 16:27:33.528159 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528169 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.528180 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.528190 | orchestrator | 2025-05-31 16:27:33.528201 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.528211 | orchestrator | Saturday 31 May 2025 16:26:08 +0000 (0:00:00.276) 0:00:12.508 ********** 2025-05-31 16:27:33.528222 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.528233 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.528243 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.528254 | orchestrator | 2025-05-31 16:27:33.528264 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.528274 | orchestrator | Saturday 31 May 2025 16:26:09 +0000 (0:00:00.440) 0:00:12.948 ********** 2025-05-31 16:27:33.528293 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528310 | orchestrator | 2025-05-31 16:27:33.528327 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.528345 | orchestrator | Saturday 31 May 2025 16:26:09 +0000 (0:00:00.162) 0:00:13.111 ********** 2025-05-31 16:27:33.528362 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528378 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.528395 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.528413 | orchestrator | 2025-05-31 16:27:33.528428 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.528444 | orchestrator | Saturday 31 May 2025 16:26:09 +0000 (0:00:00.428) 0:00:13.539 ********** 2025-05-31 16:27:33.528460 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.528477 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.528493 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.528509 | orchestrator | 2025-05-31 16:27:33.528521 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.528531 | orchestrator | Saturday 31 May 2025 16:26:10 +0000 (0:00:00.462) 0:00:14.001 ********** 2025-05-31 16:27:33.528540 | orchestrator | skipping: [testbed-node-0] 2025-05-31 
16:27:33.528550 | orchestrator | 2025-05-31 16:27:33.528559 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.528568 | orchestrator | Saturday 31 May 2025 16:26:10 +0000 (0:00:00.115) 0:00:14.117 ********** 2025-05-31 16:27:33.528582 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528596 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.528606 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.528615 | orchestrator | 2025-05-31 16:27:33.528624 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 16:27:33.528634 | orchestrator | Saturday 31 May 2025 16:26:10 +0000 (0:00:00.468) 0:00:14.585 ********** 2025-05-31 16:27:33.528643 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:27:33.528652 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:27:33.528662 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:27:33.528671 | orchestrator | 2025-05-31 16:27:33.528680 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 16:27:33.528690 | orchestrator | Saturday 31 May 2025 16:26:11 +0000 (0:00:00.647) 0:00:15.233 ********** 2025-05-31 16:27:33.528699 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528708 | orchestrator | 2025-05-31 16:27:33.528718 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 16:27:33.528727 | orchestrator | Saturday 31 May 2025 16:26:11 +0000 (0:00:00.312) 0:00:15.546 ********** 2025-05-31 16:27:33.528736 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.528746 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.528755 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.528772 | orchestrator | 2025-05-31 16:27:33.528781 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-31 16:27:33.528791 | orchestrator | Saturday 31 May 2025 16:26:12 +0000 (0:00:00.821) 0:00:16.368 ********** 2025-05-31 16:27:33.528800 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:27:33.528810 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:27:33.528819 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:27:33.528829 | orchestrator | 2025-05-31 16:27:33.528861 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-31 16:27:33.528872 | orchestrator | Saturday 31 May 2025 16:26:15 +0000 (0:00:03.312) 0:00:19.680 ********** 2025-05-31 16:27:33.528882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-31 16:27:33.528899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-31 16:27:33.528909 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-31 16:27:33.528919 | orchestrator | 2025-05-31 16:27:33.528928 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-31 16:27:33.528937 | orchestrator | Saturday 31 May 2025 16:26:18 +0000 (0:00:02.943) 0:00:22.624 ********** 2025-05-31 16:27:33.528947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-31 16:27:33.528963 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-31 16:27:33.528973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-31 16:27:33.528982 | orchestrator | 2025-05-31 16:27:33.528992 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-31 16:27:33.529001 | orchestrator | Saturday 31 May 2025 16:26:21 +0000 (0:00:02.594) 0:00:25.219 ********** 2025-05-31 16:27:33.529011 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-31 16:27:33.529020 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-31 16:27:33.529030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-31 16:27:33.529039 | orchestrator | 2025-05-31 16:27:33.529048 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-31 16:27:33.529058 | orchestrator | Saturday 31 May 2025 16:26:23 +0000 (0:00:01.924) 0:00:27.143 ********** 2025-05-31 16:27:33.529067 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.529076 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.529086 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.529095 | orchestrator | 2025-05-31 16:27:33.529104 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-31 16:27:33.529114 | orchestrator | Saturday 31 May 2025 16:26:23 +0000 (0:00:00.293) 0:00:27.437 ********** 2025-05-31 16:27:33.529123 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.529133 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.529142 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.529151 | orchestrator | 2025-05-31 16:27:33.529161 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 16:27:33.529170 | orchestrator | Saturday 31 May 2025 16:26:23 +0000 (0:00:00.282) 0:00:27.719 ********** 2025-05-31 16:27:33.529180 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:27:33.529190 | orchestrator | 2025-05-31 16:27:33.529208 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-31 16:27:33.529223 | orchestrator | Saturday 31 May 2025 16:26:24 +0000 (0:00:00.535) 0:00:28.255 ********** 2025-05-31 16:27:33.529251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.529348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.529382 | orchestrator | 
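For readability, the horizon service definition that the tasks above iterate over can be condensed as follows. This is an editorial summary of the values repeated in the item dumps in this log, not the literal kolla-ansible variable file; the frontend/backend *_extra ACME and balance options are left to comments, and the healthcheck address shown is the one for testbed-node-0 (each node probes its own internal API address).

    horizon_service = {
        "container_name": "horizon",
        "group": "horizon",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/horizon:24.0.1.20241206",
        # Dashboard plugins are toggled per service; only these are "yes" in
        # this run, all other ENABLE_* flags are "no".
        "environment": {
            "ENABLE_DESIGNATE": "yes",
            "ENABLE_HEAT": "yes",
            "ENABLE_MAGNUM": "yes",
            "ENABLE_MANILA": "yes",
            "ENABLE_OCTAVIA": "yes",
        },
        "volumes": [
            "/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        # Healthcheck probes the node's internal API address
        # (192.168.16.10 / .11 / .12) on port 80.
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
            "timeout": "30",
        },
        # HAProxy frontends listen on 443 (internal and external VIPs) with a
        # plain-HTTP redirect on 80; the backend is plain HTTP
        # (tls_backend: "no") and ACME challenges go to acme_client_back.
        "haproxy": {
            "horizon": {"enabled": True, "mode": "http", "external": False,
                        "port": "443", "listen_port": "80", "tls_backend": "no"},
            "horizon_redirect": {"enabled": True, "mode": "redirect",
                                 "external": False, "port": "80",
                                 "listen_port": "80"},
            "horizon_external": {"enabled": True, "mode": "http",
                                 "external": True,
                                 "external_fqdn": "api.testbed.osism.xyz",
                                 "port": "443", "listen_port": "80",
                                 "tls_backend": "no"},
            "horizon_external_redirect": {"enabled": True, "mode": "redirect",
                                          "external": True,
                                          "external_fqdn": "api.testbed.osism.xyz",
                                          "port": "80", "listen_port": "80"},
            "acme_client": {"enabled": True, "with_frontend": False,
                            "custom_member_list": []},
        },
    }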
changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.529394 | orchestrator | 2025-05-31 16:27:33.529408 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-31 16:27:33.529418 | orchestrator | Saturday 31 May 2025 16:26:25 +0000 (0:00:01.466) 0:00:29.721 ********** 2025-05-31 16:27:33.529429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:27:33.529445 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.529462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backen2025-05-31 16:27:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:33.529479 | orchestrator | d': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:27:33.529489 | 
orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.529500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:27:33.529516 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.529526 | orchestrator | 2025-05-31 16:27:33.529535 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-31 16:27:33.529545 | orchestrator | Saturday 31 May 2025 16:26:26 +0000 (0:00:00.716) 0:00:30.438 ********** 2025-05-31 16:27:33.529568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:27:33.529579 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.529590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-05-31 16:27:33.529606 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.529630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 16:27:33.529646 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.529656 | orchestrator | 2025-05-31 16:27:33.529665 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-31 16:27:33.529690 | orchestrator | Saturday 31 May 2025 16:26:27 +0000 (0:00:00.920) 0:00:31.358 ********** 2025-05-31 16:27:33.529717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.529734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.529775 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 16:27:33.529788 | orchestrator | 2025-05-31 16:27:33.529813 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 16:27:33.529824 | orchestrator | Saturday 31 May 2025 16:26:31 +0000 (0:00:04.223) 0:00:35.581 ********** 2025-05-31 16:27:33.529834 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:27:33.529863 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:27:33.529873 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:27:33.529882 | orchestrator | 2025-05-31 16:27:33.529892 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 16:27:33.529901 | orchestrator | Saturday 31 May 2025 16:26:32 +0000 (0:00:00.416) 0:00:35.998 ********** 2025-05-31 16:27:33.529910 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:27:33.529920 | orchestrator | 2025-05-31 16:27:33.529929 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-31 16:27:33.529939 | orchestrator | Saturday 31 May 2025 16:26:32 +0000 (0:00:00.565) 0:00:36.563 ********** 2025-05-31 16:27:33.529954 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:27:33.529963 | orchestrator | 2025-05-31 16:27:33.529973 | 
orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-31 16:27:33.529982 | orchestrator | Saturday 31 May 2025 16:26:35 +0000 (0:00:02.579) 0:00:39.143 ********** 2025-05-31 16:27:33.529992 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:27:33.530001 | orchestrator | 2025-05-31 16:27:33.530011 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-31 16:27:33.530069 | orchestrator | Saturday 31 May 2025 16:26:37 +0000 (0:00:02.405) 0:00:41.549 ********** 2025-05-31 16:27:33.530079 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:27:33.530088 | orchestrator | 2025-05-31 16:27:33.530097 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-31 16:27:33.530107 | orchestrator | Saturday 31 May 2025 16:26:52 +0000 (0:00:14.384) 0:00:55.933 ********** 2025-05-31 16:27:33.530116 | orchestrator | 2025-05-31 16:27:33.530125 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-31 16:27:33.530135 | orchestrator | Saturday 31 May 2025 16:26:52 +0000 (0:00:00.053) 0:00:55.986 ********** 2025-05-31 16:27:33.530144 | orchestrator | 2025-05-31 16:27:33.530153 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-31 16:27:33.530163 | orchestrator | Saturday 31 May 2025 16:26:52 +0000 (0:00:00.159) 0:00:56.146 ********** 2025-05-31 16:27:33.530172 | orchestrator | 2025-05-31 16:27:33.530181 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-31 16:27:33.530190 | orchestrator | Saturday 31 May 2025 16:26:52 +0000 (0:00:00.056) 0:00:56.202 ********** 2025-05-31 16:27:33.530199 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:27:33.530209 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:27:33.530218 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:27:33.530228 | orchestrator | 2025-05-31 16:27:33.530237 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:27:33.530247 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-31 16:27:33.530256 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-31 16:27:33.530266 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-31 16:27:33.530276 | orchestrator | 2025-05-31 16:27:33.530285 | orchestrator | 2025-05-31 16:27:33.530294 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:27:33.530304 | orchestrator | Saturday 31 May 2025 16:27:30 +0000 (0:00:38.191) 0:01:34.394 ********** 2025-05-31 16:27:33.530313 | orchestrator | =============================================================================== 2025-05-31 16:27:33.530322 | orchestrator | horizon : Restart horizon container ------------------------------------ 38.19s 2025-05-31 16:27:33.530332 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.38s 2025-05-31 16:27:33.530341 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.22s 2025-05-31 16:27:33.530350 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.31s 2025-05-31 16:27:33.530360 | 
orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.94s 2025-05-31 16:27:33.530369 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.59s 2025-05-31 16:27:33.530378 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.58s 2025-05-31 16:27:33.530388 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.41s 2025-05-31 16:27:33.530397 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.92s 2025-05-31 16:27:33.530406 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.74s 2025-05-31 16:27:33.530422 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.47s 2025-05-31 16:27:33.530431 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.98s 2025-05-31 16:27:33.530446 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.92s 2025-05-31 16:27:33.530456 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.82s 2025-05-31 16:27:33.530466 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-05-31 16:27:33.530475 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s 2025-05-31 16:27:33.530484 | orchestrator | horizon : Update policy file name --------------------------------------- 0.65s 2025-05-31 16:27:33.530502 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2025-05-31 16:27:33.530512 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-05-31 16:27:33.530521 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2025-05-31 16:27:36.571368 | orchestrator | 2025-05-31 16:27:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:36.573197 | orchestrator | 2025-05-31 16:27:36 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:36.574341 | orchestrator | 2025-05-31 16:27:36 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:36.574758 | orchestrator | 2025-05-31 16:27:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:39.623808 | orchestrator | 2025-05-31 16:27:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:39.625792 | orchestrator | 2025-05-31 16:27:39 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:39.627823 | orchestrator | 2025-05-31 16:27:39 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:39.627928 | orchestrator | 2025-05-31 16:27:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:42.677276 | orchestrator | 2025-05-31 16:27:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:42.679021 | orchestrator | 2025-05-31 16:27:42 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:42.682086 | orchestrator | 2025-05-31 16:27:42 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:42.682119 | orchestrator | 2025-05-31 16:27:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:45.735599 | 
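The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines around this point come from the deploy wrapper polling the state of the queued tasks. A minimal sketch of that polling pattern is given below; it is illustrative only, not the actual osism client code, and get_task_state() is a hypothetical stand-in for the real task-status lookup.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll each task until it reaches a terminal state, logging as it goes."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)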
orchestrator | 2025-05-31 16:27:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:45.738636 | orchestrator | 2025-05-31 16:27:45 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:45.739750 | orchestrator | 2025-05-31 16:27:45 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:45.739956 | orchestrator | 2025-05-31 16:27:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:48.794524 | orchestrator | 2025-05-31 16:27:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:48.797100 | orchestrator | 2025-05-31 16:27:48 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:48.798721 | orchestrator | 2025-05-31 16:27:48 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:48.798748 | orchestrator | 2025-05-31 16:27:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:51.845984 | orchestrator | 2025-05-31 16:27:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:51.847642 | orchestrator | 2025-05-31 16:27:51 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state STARTED 2025-05-31 16:27:51.848687 | orchestrator | 2025-05-31 16:27:51 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:51.848721 | orchestrator | 2025-05-31 16:27:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:54.909489 | orchestrator | 2025-05-31 16:27:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:54.911265 | orchestrator | 2025-05-31 16:27:54 | INFO  | Task e54ad9f8-aa2f-404b-9324-844b63402d9b is in state SUCCESS 2025-05-31 16:27:54.912600 | orchestrator | 2025-05-31 16:27:54.912636 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-31 16:27:54.912648 | orchestrator | 2025-05-31 16:27:54.912659 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-31 16:27:54.913018 | orchestrator | 2025-05-31 16:27:54.913031 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-31 16:27:54.913043 | orchestrator | Saturday 31 May 2025 16:25:47 +0000 (0:00:01.101) 0:00:01.101 ********** 2025-05-31 16:27:54.913055 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:27:54.913067 | orchestrator | 2025-05-31 16:27:54.913078 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-31 16:27:54.913089 | orchestrator | Saturday 31 May 2025 16:25:48 +0000 (0:00:00.508) 0:00:01.609 ********** 2025-05-31 16:27:54.913100 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-31 16:27:54.913111 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-31 16:27:54.913122 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-31 16:27:54.913132 | orchestrator | 2025-05-31 16:27:54.913143 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-31 16:27:54.913230 | orchestrator | Saturday 31 May 2025 16:25:49 +0000 (0:00:00.809) 0:00:02.419 ********** 2025-05-31 16:27:54.913246 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for 
testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:27:54.913257 | orchestrator | 2025-05-31 16:27:54.913268 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-31 16:27:54.913279 | orchestrator | Saturday 31 May 2025 16:25:49 +0000 (0:00:00.677) 0:00:03.096 ********** 2025-05-31 16:27:54.913289 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913300 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913310 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913321 | orchestrator | 2025-05-31 16:27:54.913332 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-31 16:27:54.913342 | orchestrator | Saturday 31 May 2025 16:25:50 +0000 (0:00:00.717) 0:00:03.814 ********** 2025-05-31 16:27:54.913353 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913363 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913373 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913384 | orchestrator | 2025-05-31 16:27:54.913395 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-31 16:27:54.913405 | orchestrator | Saturday 31 May 2025 16:25:50 +0000 (0:00:00.309) 0:00:04.123 ********** 2025-05-31 16:27:54.913415 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913426 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913436 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913447 | orchestrator | 2025-05-31 16:27:54.913458 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-31 16:27:54.913468 | orchestrator | Saturday 31 May 2025 16:25:51 +0000 (0:00:00.857) 0:00:04.981 ********** 2025-05-31 16:27:54.913479 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913490 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913501 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913531 | orchestrator | 2025-05-31 16:27:54.913542 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-31 16:27:54.913553 | orchestrator | Saturday 31 May 2025 16:25:52 +0000 (0:00:00.298) 0:00:05.279 ********** 2025-05-31 16:27:54.913563 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913574 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913584 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913594 | orchestrator | 2025-05-31 16:27:54.913604 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-31 16:27:54.913615 | orchestrator | Saturday 31 May 2025 16:25:52 +0000 (0:00:00.299) 0:00:05.579 ********** 2025-05-31 16:27:54.913628 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913641 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913652 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913664 | orchestrator | 2025-05-31 16:27:54.913676 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-31 16:27:54.913688 | orchestrator | Saturday 31 May 2025 16:25:52 +0000 (0:00:00.303) 0:00:05.882 ********** 2025-05-31 16:27:54.913700 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.913713 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.913725 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.913737 | orchestrator | 2025-05-31 16:27:54.913749 | orchestrator | TASK [ceph-facts : set_fact 
ceph_release ceph_stable_release] ****************** 2025-05-31 16:27:54.913760 | orchestrator | Saturday 31 May 2025 16:25:53 +0000 (0:00:00.490) 0:00:06.373 ********** 2025-05-31 16:27:54.913773 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913785 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913797 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913808 | orchestrator | 2025-05-31 16:27:54.913820 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-31 16:27:54.913855 | orchestrator | Saturday 31 May 2025 16:25:53 +0000 (0:00:00.289) 0:00:06.663 ********** 2025-05-31 16:27:54.913867 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 16:27:54.913879 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:27:54.913891 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:27:54.913903 | orchestrator | 2025-05-31 16:27:54.913915 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-31 16:27:54.913927 | orchestrator | Saturday 31 May 2025 16:25:54 +0000 (0:00:00.650) 0:00:07.313 ********** 2025-05-31 16:27:54.913939 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.913950 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.913963 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.913975 | orchestrator | 2025-05-31 16:27:54.913987 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-31 16:27:54.913997 | orchestrator | Saturday 31 May 2025 16:25:54 +0000 (0:00:00.497) 0:00:07.811 ********** 2025-05-31 16:27:54.914067 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 16:27:54.914082 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:27:54.914092 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:27:54.914103 | orchestrator | 2025-05-31 16:27:54.914114 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-31 16:27:54.914124 | orchestrator | Saturday 31 May 2025 16:25:56 +0000 (0:00:02.314) 0:00:10.125 ********** 2025-05-31 16:27:54.914135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:27:54.914145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:27:54.914156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:27:54.914167 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914177 | orchestrator | 2025-05-31 16:27:54.914188 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-31 16:27:54.914204 | orchestrator | Saturday 31 May 2025 16:25:57 +0000 (0:00:00.432) 0:00:10.557 ********** 2025-05-31 16:27:54.914283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-31 16:27:54.914300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-31 16:27:54.914312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-31 16:27:54.914323 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914333 | orchestrator | 2025-05-31 16:27:54.914344 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-31 16:27:54.914354 | orchestrator | Saturday 31 May 2025 16:25:57 +0000 (0:00:00.627) 0:00:11.185 ********** 2025-05-31 16:27:54.914367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:27:54.914382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:27:54.914393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:27:54.914404 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914415 | orchestrator | 2025-05-31 16:27:54.914425 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-31 16:27:54.914436 | orchestrator | Saturday 31 May 2025 16:25:58 +0000 (0:00:00.166) 0:00:11.352 ********** 2025-05-31 16:27:54.914449 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '3fa7fc57c0c6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-31 16:25:55.438978', 'end': '2025-05-31 16:25:55.484625', 'delta': '0:00:00.045647', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3fa7fc57c0c6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-31 16:27:54.914474 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '6c7f55223df5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-31 16:25:55.997861', 'end': '2025-05-31 16:25:56.043746', 'delta': '0:00:00.045885', 
'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6c7f55223df5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-31 16:27:54.914499 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '427a0ac582ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-31 16:25:56.528157', 'end': '2025-05-31 16:25:56.568690', 'delta': '0:00:00.040533', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['427a0ac582ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-31 16:27:54.914511 | orchestrator | 2025-05-31 16:27:54.914522 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-31 16:27:54.914532 | orchestrator | Saturday 31 May 2025 16:25:58 +0000 (0:00:00.203) 0:00:11.556 ********** 2025-05-31 16:27:54.914543 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.914554 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.914564 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.914575 | orchestrator | 2025-05-31 16:27:54.914585 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-31 16:27:54.914596 | orchestrator | Saturday 31 May 2025 16:25:58 +0000 (0:00:00.558) 0:00:12.114 ********** 2025-05-31 16:27:54.914606 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-31 16:27:54.914617 | orchestrator | 2025-05-31 16:27:54.914627 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-31 16:27:54.914638 | orchestrator | Saturday 31 May 2025 16:26:00 +0000 (0:00:01.371) 0:00:13.486 ********** 2025-05-31 16:27:54.914649 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914659 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.914670 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.914681 | orchestrator | 2025-05-31 16:27:54.914691 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-31 16:27:54.914702 | orchestrator | Saturday 31 May 2025 16:26:00 +0000 (0:00:00.464) 0:00:13.951 ********** 2025-05-31 16:27:54.914712 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914723 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.914733 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.914743 | orchestrator | 2025-05-31 16:27:54.914754 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-31 16:27:54.914764 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.482) 0:00:14.433 ********** 2025-05-31 16:27:54.914775 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914786 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.914796 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 16:27:54.914807 | orchestrator | 2025-05-31 16:27:54.914817 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-31 16:27:54.914858 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.288) 0:00:14.722 ********** 2025-05-31 16:27:54.914870 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.914881 | orchestrator | 2025-05-31 16:27:54.914892 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-31 16:27:54.914902 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.126) 0:00:14.849 ********** 2025-05-31 16:27:54.914913 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914923 | orchestrator | 2025-05-31 16:27:54.914934 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-31 16:27:54.914944 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.241) 0:00:15.091 ********** 2025-05-31 16:27:54.914960 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.914971 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.914981 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.914992 | orchestrator | 2025-05-31 16:27:54.915003 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-31 16:27:54.915013 | orchestrator | Saturday 31 May 2025 16:26:02 +0000 (0:00:00.504) 0:00:15.595 ********** 2025-05-31 16:27:54.915024 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.915035 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.915045 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.915056 | orchestrator | 2025-05-31 16:27:54.915067 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-31 16:27:54.915077 | orchestrator | Saturday 31 May 2025 16:26:02 +0000 (0:00:00.346) 0:00:15.942 ********** 2025-05-31 16:27:54.915088 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.915098 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.915109 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.915119 | orchestrator | 2025-05-31 16:27:54.915130 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-31 16:27:54.915140 | orchestrator | Saturday 31 May 2025 16:26:03 +0000 (0:00:00.329) 0:00:16.271 ********** 2025-05-31 16:27:54.915151 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.915162 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.915179 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.915190 | orchestrator | 2025-05-31 16:27:54.915201 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-31 16:27:54.915212 | orchestrator | Saturday 31 May 2025 16:26:03 +0000 (0:00:00.313) 0:00:16.585 ********** 2025-05-31 16:27:54.915222 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.915233 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.915243 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.915254 | orchestrator | 2025-05-31 16:27:54.915265 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-31 16:27:54.915275 | orchestrator | Saturday 31 May 2025 16:26:03 +0000 (0:00:00.536) 0:00:17.121 ********** 2025-05-31 16:27:54.915286 | orchestrator | 
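For reference, the "find a running mon container" results earlier in this play show the role shelling out to docker ps with a name filter for each monitor host; a non-empty container ID means that mon is running. A rough Python equivalent of that check, assuming Docker is the container runtime as in this run, is:

    import subprocess

    def running_mon_container(hostname):
        """Return the container ID of ceph-mon-<hostname>, or None if not running."""
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
            capture_output=True, text=True, check=True)
        return result.stdout.strip() or None

    # In this run: testbed-node-0 -> "3fa7fc57c0c6", testbed-node-1 ->
    # "6c7f55223df5", testbed-node-2 -> "427a0ac582ac".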
skipping: [testbed-node-3] 2025-05-31 16:27:54.915296 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.915329 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.915340 | orchestrator | 2025-05-31 16:27:54.915351 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-31 16:27:54.915362 | orchestrator | Saturday 31 May 2025 16:26:04 +0000 (0:00:00.362) 0:00:17.484 ********** 2025-05-31 16:27:54.915372 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.915383 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.915393 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.915404 | orchestrator | 2025-05-31 16:27:54.915414 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-31 16:27:54.915429 | orchestrator | Saturday 31 May 2025 16:26:04 +0000 (0:00:00.332) 0:00:17.816 ********** 2025-05-31 16:27:54.915441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e43a14fa--64bd--59a3--8350--23173f11027f-osd--block--e43a14fa--64bd--59a3--8350--23173f11027f', 'dm-uuid-LVM-KjcMReimo5PsGyzXZpJsMXXL4dS0YiPoTecRVN1Fy57MdjoKWR3FvNsX0MOcMulr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92adfeec--5c5c--5208--b88e--9a01a071247e-osd--block--92adfeec--5c5c--5208--b88e--9a01a071247e', 'dm-uuid-LVM-5L4hRZnla14bwaKnoVbRc8bkaoz3f9wwe6EmGBv2eh7By4XPR3zw4G1eX0Emizbu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad7aff40--0fc1--546d--9ec3--a4c69926416d-osd--block--ad7aff40--0fc1--546d--9ec3--a4c69926416d', 'dm-uuid-LVM-gSL583eMFM2i8rm1dadR9TUAB3PZW3cKY6xXChbVmHt2kudAn3jbfuE7DPOsuThG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_934f4ed8-ee22-4056-bf57-9bb86fd501b4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--02409adc--b936--5a4c--b212--7809fa63c72a-osd--block--02409adc--b936--5a4c--b212--7809fa63c72a', 'dm-uuid-LVM-GGIt5bSU2nDGA03xHpuGJYokBEYl6P8PZMjzNu1e81SWhC4M3JehoGEPbmfjWgoM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e43a14fa--64bd--59a3--8350--23173f11027f-osd--block--e43a14fa--64bd--59a3--8350--23173f11027f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JQswj1-y2Ph-Xuu3-deb4-fEd9-oYwM-slHKsQ', 'scsi-0QEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae', 'scsi-SQEMU_QEMU_HARDDISK_dcaed5f8-6e74-4dcd-98b8-10fdfb9502ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--92adfeec--5c5c--5208--b88e--9a01a071247e-osd--block--92adfeec--5c5c--5208--b88e--9a01a071247e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PSWQNF-Gqqg-V0D3-TdNi-9gjD-0Aeq-QHIN7Y', 'scsi-0QEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72', 'scsi-SQEMU_QEMU_HARDDISK_35c478e1-8eef-4047-84ea-c6dce0624e72'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da', 'scsi-SQEMU_QEMU_HARDDISK_433a7fb9-b2f1-41fe-ae71-e8a6b0aff1da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915773 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.915784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a818804--e2a7--5d8b--beae--a4acf44277a5-osd--block--6a818804--e2a7--5d8b--beae--a4acf44277a5', 'dm-uuid-LVM-0jf03cHtqG7zvC3qv0JSCL53Z5bdp1zXF3YoZvYkZLEFb7aLtJQklGGjm6Z8Trxc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b45f5b5--5599--560e--b955--f5f9e148b85f-osd--block--8b45f5b5--5599--560e--b955--f5f9e148b85f', 'dm-uuid-LVM-75ZA7Zi1Js7x7tZot6FcZ90qIF7l3KM08jpzcoMbfQPbddcGSUZ7j5KLc7a3a4mj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec58de5b-e589-4299-a3da-23bfa2e4200b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.915976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ad7aff40--0fc1--546d--9ec3--a4c69926416d-osd--block--ad7aff40--0fc1--546d--9ec3--a4c69926416d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JVV0Tk-kRsG-tLPh-LFPx-aKlE-9rbR-GOFlU8', 'scsi-0QEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81', 'scsi-SQEMU_QEMU_HARDDISK_ddb9130a-4326-47f1-9e34-5e5625e80e81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.915992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.916004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--02409adc--b936--5a4c--b212--7809fa63c72a-osd--block--02409adc--b936--5a4c--b212--7809fa63c72a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AqzP4V-1dni-XqoZ-OxyJ-fowG-XCMJ-LJnZWK', 'scsi-0QEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5', 'scsi-SQEMU_QEMU_HARDDISK_5b4bcfff-b004-43b2-aa93-003eb1863ed5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.916038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.916049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa', 'scsi-SQEMU_QEMU_HARDDISK_f41ad415-f570-4f0b-8f25-7db49ff0cbfa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.916083 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.916095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.916112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:27:54.916129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c2c2026-b533-49c5-9163-99a4bbff9cf3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6a818804--e2a7--5d8b--beae--a4acf44277a5-osd--block--6a818804--e2a7--5d8b--beae--a4acf44277a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-igRBdo-6hUg-NeJy-a5BC-R1WQ-ghaE-dsu4Fq', 'scsi-0QEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6', 'scsi-SQEMU_QEMU_HARDDISK_88f31b31-d9a0-4986-b3f2-c890facc2af6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8b45f5b5--5599--560e--b955--f5f9e148b85f-osd--block--8b45f5b5--5599--560e--b955--f5f9e148b85f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cBBI9B-Yzab-mAyO-KuMO-MtUG-4AUI-cM0kWu', 'scsi-0QEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604', 'scsi-SQEMU_QEMU_HARDDISK_42e25697-c4c6-4260-bea6-0d0d8bf43604'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe', 'scsi-SQEMU_QEMU_HARDDISK_86312f09-330b-437e-8315-0e2c008d5fbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:27:54.916209 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.916220 | orchestrator | 2025-05-31 16:27:54.916231 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-31 16:27:54.916247 | orchestrator | Saturday 31 May 2025 16:26:05 +0000 (0:00:00.604) 0:00:18.421 ********** 2025-05-31 16:27:54.916258 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-31 16:27:54.916269 | orchestrator | 2025-05-31 16:27:54.916280 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-31 16:27:54.916290 | orchestrator | Saturday 31 May 2025 16:26:06 +0000 (0:00:01.468) 0:00:19.889 ********** 2025-05-31 16:27:54.916301 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.916312 | orchestrator | 2025-05-31 16:27:54.916322 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-31 16:27:54.916333 | orchestrator | Saturday 31 May 2025 16:26:06 +0000 (0:00:00.162) 0:00:20.051 ********** 2025-05-31 16:27:54.916344 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.916355 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.916366 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.916375 | orchestrator | 2025-05-31 16:27:54.916385 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-31 16:27:54.916394 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:00.354) 0:00:20.406 ********** 2025-05-31 16:27:54.916404 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.916413 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.916423 | orchestrator | ok: 
[testbed-node-5] 2025-05-31 16:27:54.916432 | orchestrator | 2025-05-31 16:27:54.916442 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-31 16:27:54.916451 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:00.682) 0:00:21.088 ********** 2025-05-31 16:27:54.916461 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.916470 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.916480 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.916489 | orchestrator | 2025-05-31 16:27:54.916499 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-31 16:27:54.916508 | orchestrator | Saturday 31 May 2025 16:26:08 +0000 (0:00:00.309) 0:00:21.397 ********** 2025-05-31 16:27:54.916518 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.916527 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.916537 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.916546 | orchestrator | 2025-05-31 16:27:54.916555 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-31 16:27:54.916565 | orchestrator | Saturday 31 May 2025 16:26:09 +0000 (0:00:00.881) 0:00:22.279 ********** 2025-05-31 16:27:54.916575 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.916584 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.916593 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.916603 | orchestrator | 2025-05-31 16:27:54.916613 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-31 16:27:54.916622 | orchestrator | Saturday 31 May 2025 16:26:09 +0000 (0:00:00.302) 0:00:22.582 ********** 2025-05-31 16:27:54.916632 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.916641 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.916650 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.916660 | orchestrator | 2025-05-31 16:27:54.916669 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-31 16:27:54.916679 | orchestrator | Saturday 31 May 2025 16:26:09 +0000 (0:00:00.438) 0:00:23.021 ********** 2025-05-31 16:27:54.916688 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.916698 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.916707 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.916722 | orchestrator | 2025-05-31 16:27:54.916732 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-31 16:27:54.916741 | orchestrator | Saturday 31 May 2025 16:26:10 +0000 (0:00:00.325) 0:00:23.346 ********** 2025-05-31 16:27:54.916751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:27:54.916760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:27:54.916770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:27:54.916780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:27:54.916789 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.916798 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:27:54.916808 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:27:54.916817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:27:54.916826 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:27:54.916853 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.916863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:27:54.916872 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.916882 | orchestrator | 2025-05-31 16:27:54.916891 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-31 16:27:54.916906 | orchestrator | Saturday 31 May 2025 16:26:11 +0000 (0:00:01.112) 0:00:24.459 ********** 2025-05-31 16:27:54.916916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:27:54.916925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:27:54.916935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:27:54.916944 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:27:54.916954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:27:54.916963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:27:54.916972 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:27:54.916982 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.916991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:27:54.917000 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:27:54.917010 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917019 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917028 | orchestrator | 2025-05-31 16:27:54.917038 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-31 16:27:54.917047 | orchestrator | Saturday 31 May 2025 16:26:12 +0000 (0:00:00.929) 0:00:25.389 ********** 2025-05-31 16:27:54.917074 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-31 16:27:54.917085 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-31 16:27:54.917095 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-31 16:27:54.917104 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-31 16:27:54.917114 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-31 16:27:54.917123 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-31 16:27:54.917133 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-31 16:27:54.917142 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-31 16:27:54.917151 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-31 16:27:54.917161 | orchestrator | 2025-05-31 16:27:54.917170 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-31 16:27:54.917180 | orchestrator | Saturday 31 May 2025 16:26:14 +0000 (0:00:01.964) 0:00:27.353 ********** 2025-05-31 16:27:54.917190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:27:54.917199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:27:54.917209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:27:54.917222 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:27:54.917241 | orchestrator | 
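(Editor's aside: the _monitor_addresses fact filled in by the monitor_address task here is just a list of name/addr pairs for the mons group; the log shows 192.168.16.10 through .12. A hedged sketch of assembling such a list from per-host monitor_address variables follows; the group name 'mons' and the variable name are assumptions for illustration, not necessarily the exact ones ceph-ansible uses.)

- name: Build a list of monitor name/address pairs (illustrative task fragment)
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([])
         + [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}
  loop: "{{ groups['mons'] }}"   # assumed group name; yields e.g. {'name': 'testbed-node-0', 'addr': '192.168.16.10'}
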
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:27:54.917251 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:27:54.917261 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917270 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:27:54.917279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:27:54.917289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:27:54.917298 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917307 | orchestrator | 2025-05-31 16:27:54.917317 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-31 16:27:54.917327 | orchestrator | Saturday 31 May 2025 16:26:14 +0000 (0:00:00.523) 0:00:27.877 ********** 2025-05-31 16:27:54.917336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 16:27:54.917346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 16:27:54.917355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-31 16:27:54.917364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 16:27:54.917374 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 16:27:54.917393 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 16:27:54.917402 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917412 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 16:27:54.917421 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 16:27:54.917430 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 16:27:54.917440 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917449 | orchestrator | 2025-05-31 16:27:54.917459 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-31 16:27:54.917468 | orchestrator | Saturday 31 May 2025 16:26:14 +0000 (0:00:00.317) 0:00:28.194 ********** 2025-05-31 16:27:54.917478 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:27:54.917488 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:27:54.917498 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:27:54.917507 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:27:54.917516 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:27:54.917526 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:27:54.917535 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917545 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917554 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-31 16:27:54.917570 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:27:54.917580 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:27:54.917589 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917599 | orchestrator | 2025-05-31 16:27:54.917608 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-31 16:27:54.917618 | orchestrator | Saturday 31 May 2025 16:26:15 +0000 (0:00:00.331) 0:00:28.525 ********** 2025-05-31 16:27:54.917628 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:27:54.917652 | orchestrator | 2025-05-31 16:27:54.917662 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-31 16:27:54.917671 | orchestrator | Saturday 31 May 2025 16:26:15 +0000 (0:00:00.534) 0:00:29.059 ********** 2025-05-31 16:27:54.917681 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917690 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917700 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917709 | orchestrator | 2025-05-31 16:27:54.917719 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-31 16:27:54.917733 | orchestrator | Saturday 31 May 2025 16:26:16 +0000 (0:00:00.367) 0:00:29.427 ********** 2025-05-31 16:27:54.917743 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917752 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917762 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917771 | orchestrator | 2025-05-31 16:27:54.917781 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-31 16:27:54.917790 | orchestrator | Saturday 31 May 2025 16:26:16 +0000 (0:00:00.365) 0:00:29.792 ********** 2025-05-31 16:27:54.917800 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917809 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.917818 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.917844 | orchestrator | 2025-05-31 16:27:54.917854 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-31 16:27:54.917864 | orchestrator | Saturday 31 May 2025 16:26:16 +0000 (0:00:00.343) 0:00:30.135 ********** 2025-05-31 16:27:54.917874 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.917883 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.917892 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.917902 | orchestrator | 2025-05-31 16:27:54.917911 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-31 16:27:54.917921 | orchestrator | Saturday 31 May 2025 16:26:17 +0000 (0:00:00.605) 0:00:30.741 ********** 2025-05-31 16:27:54.917930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:27:54.917940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:27:54.917949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:27:54.917959 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.917968 | orchestrator | 2025-05-31 16:27:54.917978 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-31 16:27:54.917987 | orchestrator | Saturday 31 May 2025 16:26:17 +0000 (0:00:00.365) 0:00:31.106 ********** 2025-05-31 
16:27:54.917997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:27:54.918006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:27:54.918041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:27:54.918053 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918062 | orchestrator | 2025-05-31 16:27:54.918072 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-31 16:27:54.918081 | orchestrator | Saturday 31 May 2025 16:26:18 +0000 (0:00:00.308) 0:00:31.414 ********** 2025-05-31 16:27:54.918091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:27:54.918100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:27:54.918110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:27:54.918119 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918129 | orchestrator | 2025-05-31 16:27:54.918138 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:27:54.918147 | orchestrator | Saturday 31 May 2025 16:26:18 +0000 (0:00:00.343) 0:00:31.758 ********** 2025-05-31 16:27:54.918157 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:27:54.918167 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:27:54.918176 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:27:54.918193 | orchestrator | 2025-05-31 16:27:54.918202 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-31 16:27:54.918212 | orchestrator | Saturday 31 May 2025 16:26:18 +0000 (0:00:00.359) 0:00:32.117 ********** 2025-05-31 16:27:54.918221 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-31 16:27:54.918231 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-31 16:27:54.918240 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-31 16:27:54.918249 | orchestrator | 2025-05-31 16:27:54.918259 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-31 16:27:54.918268 | orchestrator | Saturday 31 May 2025 16:26:19 +0000 (0:00:00.571) 0:00:32.688 ********** 2025-05-31 16:27:54.918278 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918287 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.918296 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.918306 | orchestrator | 2025-05-31 16:27:54.918315 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-31 16:27:54.918325 | orchestrator | Saturday 31 May 2025 16:26:19 +0000 (0:00:00.393) 0:00:33.081 ********** 2025-05-31 16:27:54.918334 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918343 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.918353 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.918362 | orchestrator | 2025-05-31 16:27:54.918372 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-31 16:27:54.918387 | orchestrator | Saturday 31 May 2025 16:26:20 +0000 (0:00:00.290) 0:00:33.371 ********** 2025-05-31 16:27:54.918397 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-31 16:27:54.918406 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918416 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-31 16:27:54.918425 | 
orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.918434 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-31 16:27:54.918444 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.918453 | orchestrator | 2025-05-31 16:27:54.918462 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-31 16:27:54.918472 | orchestrator | Saturday 31 May 2025 16:26:20 +0000 (0:00:00.388) 0:00:33.759 ********** 2025-05-31 16:27:54.918482 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-31 16:27:54.918491 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918501 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-31 16:27:54.918511 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.918520 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-31 16:27:54.918535 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.918544 | orchestrator | 2025-05-31 16:27:54.918554 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-31 16:27:54.918563 | orchestrator | Saturday 31 May 2025 16:26:20 +0000 (0:00:00.246) 0:00:34.006 ********** 2025-05-31 16:27:54.918573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 16:27:54.918582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 16:27:54.918591 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 16:27:54.918600 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 16:27:54.918610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 16:27:54.918619 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918628 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 16:27:54.918638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 16:27:54.918647 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 16:27:54.918662 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.918672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 16:27:54.918681 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.918690 | orchestrator | 2025-05-31 16:27:54.918700 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-31 16:27:54.918709 | orchestrator | Saturday 31 May 2025 16:26:21 +0000 (0:00:00.757) 0:00:34.763 ********** 2025-05-31 16:27:54.918719 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.918728 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.918738 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:27:54.918747 | orchestrator | 2025-05-31 16:27:54.918756 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-31 16:27:54.918765 | orchestrator | Saturday 31 May 2025 16:26:21 +0000 (0:00:00.246) 0:00:35.010 ********** 2025-05-31 16:27:54.918775 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 16:27:54.918784 | orchestrator | 
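(Editor's aside: the skipped rgw_instances_host items above also reveal the shape of the per-host rgw_instances data: one rgw0 instance per node, bound to the node's 192.168.16.x address on port 8081. If one wanted to pin this explicitly in host_vars instead of letting the facts derive it, the entry for testbed-node-3 would look like the sketch below; whether the roles then skip their own derivation is an assumption.)

rgw_instances:                       # illustrative host_vars entry, values taken from the log above
  - instance_name: rgw0
    radosgw_address: 192.168.16.13
    radosgw_frontend_port: 8081
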
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:27:54.918794 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:27:54.918803 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-31 16:27:54.918813 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-31 16:27:54.918822 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-31 16:27:54.918873 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-31 16:27:54.918884 | orchestrator | 2025-05-31 16:27:54.918893 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-31 16:27:54.918903 | orchestrator | Saturday 31 May 2025 16:26:22 +0000 (0:00:00.782) 0:00:35.793 ********** 2025-05-31 16:27:54.918912 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 16:27:54.918921 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:27:54.918931 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:27:54.918940 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-31 16:27:54.918950 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-31 16:27:54.918959 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-31 16:27:54.918969 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-31 16:27:54.918978 | orchestrator | 2025-05-31 16:27:54.918987 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-31 16:27:54.918997 | orchestrator | Saturday 31 May 2025 16:26:23 +0000 (0:00:01.275) 0:00:37.068 ********** 2025-05-31 16:27:54.919006 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:27:54.919016 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:27:54.919025 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-31 16:27:54.919035 | orchestrator | 2025-05-31 16:27:54.919044 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-31 16:27:54.919059 | orchestrator | Saturday 31 May 2025 16:26:24 +0000 (0:00:00.532) 0:00:37.600 ********** 2025-05-31 16:27:54.919071 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:27:54.919083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:27:54.919099 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:27:54.919114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:27:54.919124 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 16:27:54.919134 | orchestrator | 2025-05-31 16:27:54.919143 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-31 16:27:54.919153 | orchestrator | Saturday 31 May 2025 16:27:05 +0000 (0:00:41.164) 0:01:18.764 ********** 2025-05-31 16:27:54.919162 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919181 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919210 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919219 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-31 16:27:54.919229 | orchestrator | 2025-05-31 16:27:54.919238 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-31 16:27:54.919247 | orchestrator | Saturday 31 May 2025 16:27:25 +0000 (0:00:20.063) 0:01:38.828 ********** 2025-05-31 16:27:54.919257 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919266 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919275 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919285 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919294 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919304 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919313 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 16:27:54.919323 | orchestrator | 2025-05-31 16:27:54.919332 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-31 16:27:54.919342 | orchestrator | Saturday 31 May 2025 16:27:35 +0000 (0:00:09.879) 0:01:48.708 ********** 2025-05-31 16:27:54.919351 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919361 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 16:27:54.919370 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 16:27:54.919379 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919389 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 16:27:54.919397 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 16:27:54.919404 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919417 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 16:27:54.919425 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 16:27:54.919432 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919440 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 16:27:54.919452 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 16:27:54.919460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919468 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 16:27:54.919476 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 16:27:54.919484 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 16:27:54.919492 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 16:27:54.919499 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 16:27:54.919507 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-31 16:27:54.919515 | orchestrator | 2025-05-31 16:27:54.919523 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:27:54.919531 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-31 16:27:54.919544 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-31 16:27:54.919552 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-31 16:27:54.919560 | orchestrator | 2025-05-31 16:27:54.919568 | orchestrator | 2025-05-31 16:27:54.919576 | orchestrator | 2025-05-31 16:27:54.919583 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:27:54.919591 | orchestrator | Saturday 31 May 2025 16:27:54 +0000 (0:00:18.895) 0:02:07.603 ********** 2025-05-31 16:27:54.919599 | orchestrator | =============================================================================== 2025-05-31 16:27:54.919607 | orchestrator | create openstack pool(s) ----------------------------------------------- 41.16s 2025-05-31 16:27:54.919614 | orchestrator | generate keys ---------------------------------------------------------- 20.06s 2025-05-31 16:27:54.919622 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.90s 2025-05-31 16:27:54.919630 | orchestrator | get keys from monitors -------------------------------------------------- 9.88s 2025-05-31 16:27:54.919637 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.31s 2025-05-31 16:27:54.919645 | orchestrator | ceph-facts 
: set_fact _monitor_addresses to monitor_address ------------- 1.96s 2025-05-31 16:27:54.919653 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.47s 2025-05-31 16:27:54.919661 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.37s 2025-05-31 16:27:54.919668 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.28s 2025-05-31 16:27:54.919676 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.11s 2025-05-31 16:27:54.919684 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.93s 2025-05-31 16:27:54.919692 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.88s 2025-05-31 16:27:54.919699 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.86s 2025-05-31 16:27:54.919707 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.81s 2025-05-31 16:27:54.919715 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.78s 2025-05-31 16:27:54.919727 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.76s 2025-05-31 16:27:54.919735 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.72s 2025-05-31 16:27:54.919743 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.68s 2025-05-31 16:27:54.919751 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.68s 2025-05-31 16:27:54.919758 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2025-05-31 16:27:54.919766 | orchestrator | 2025-05-31 16:27:54 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:54.919774 | orchestrator | 2025-05-31 16:27:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:27:57.962320 | orchestrator | 2025-05-31 16:27:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:27:57.964110 | orchestrator | 2025-05-31 16:27:57 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:27:57.965607 | orchestrator | 2025-05-31 16:27:57 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:27:57.965986 | orchestrator | 2025-05-31 16:27:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:01.019285 | orchestrator | 2025-05-31 16:28:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:01.023021 | orchestrator | 2025-05-31 16:28:01 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:01.024162 | orchestrator | 2025-05-31 16:28:01 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:01.024196 | orchestrator | 2025-05-31 16:28:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:04.073990 | orchestrator | 2025-05-31 16:28:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:04.075344 | orchestrator | 2025-05-31 16:28:04 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:04.078223 | orchestrator | 2025-05-31 16:28:04 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:04.083232 | orchestrator | 
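The ceph-ansible play that just finished (see the PLAY RECAP and TASKS RECAP above) spends most of its runtime in "create openstack pool(s)", which loops over one definition per pool. Restated as YAML, each logged item corresponds to an entry roughly like the following; this is a sketch reconstructed from the item dicts in the log, and the variable name openstack_pools is an assumption, not a verified excerpt of the playbook:

openstack_pools:
  - name: volumes                # backups, images, metrics and vms use the same settings in the log
    application: rbd
    pg_num: 32
    pgp_num: 32
    pg_autoscale_mode: false
    size: 3
    min_size: 0
    rule_name: replicated_rule
    type: 1                      # 1 corresponds to a replicated pool
    erasure_profile: ""
    expected_num_objects: ""

Done by hand, each entry would map roughly to "ceph osd pool create volumes 32 32 replicated replicated_rule" plus "ceph osd pool application enable volumes rbd" and a separate size adjustment; according to the TASKS RECAP, the play creates all five pools in about 41 seconds.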
2025-05-31 16:28:04 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:04.083274 | orchestrator | 2025-05-31 16:28:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:07.134823 | orchestrator | 2025-05-31 16:28:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:07.135544 | orchestrator | 2025-05-31 16:28:07 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:07.136648 | orchestrator | 2025-05-31 16:28:07 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:07.137760 | orchestrator | 2025-05-31 16:28:07 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:07.137791 | orchestrator | 2025-05-31 16:28:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:10.188346 | orchestrator | 2025-05-31 16:28:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:10.189507 | orchestrator | 2025-05-31 16:28:10 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:10.190767 | orchestrator | 2025-05-31 16:28:10 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:10.193007 | orchestrator | 2025-05-31 16:28:10 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:10.193050 | orchestrator | 2025-05-31 16:28:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:13.249934 | orchestrator | 2025-05-31 16:28:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:13.251427 | orchestrator | 2025-05-31 16:28:13 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:13.254177 | orchestrator | 2025-05-31 16:28:13 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:13.255894 | orchestrator | 2025-05-31 16:28:13 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:13.255941 | orchestrator | 2025-05-31 16:28:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:16.295681 | orchestrator | 2025-05-31 16:28:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:16.299095 | orchestrator | 2025-05-31 16:28:16 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:16.303432 | orchestrator | 2025-05-31 16:28:16 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:16.306071 | orchestrator | 2025-05-31 16:28:16 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:16.306115 | orchestrator | 2025-05-31 16:28:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:19.358578 | orchestrator | 2025-05-31 16:28:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:19.360092 | orchestrator | 2025-05-31 16:28:19 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:19.365416 | orchestrator | 2025-05-31 16:28:19 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:19.365451 | orchestrator | 2025-05-31 16:28:19 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:19.365467 | orchestrator | 2025-05-31 16:28:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:22.419237 | orchestrator | 2025-05-31 16:28:22 | INFO 
 | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:22.419339 | orchestrator | 2025-05-31 16:28:22 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:22.419990 | orchestrator | 2025-05-31 16:28:22 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:22.421251 | orchestrator | 2025-05-31 16:28:22 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:22.421273 | orchestrator | 2025-05-31 16:28:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:25.474123 | orchestrator | 2025-05-31 16:28:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:25.475295 | orchestrator | 2025-05-31 16:28:25 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:25.476917 | orchestrator | 2025-05-31 16:28:25 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:25.477422 | orchestrator | 2025-05-31 16:28:25 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:25.477453 | orchestrator | 2025-05-31 16:28:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:28.528390 | orchestrator | 2025-05-31 16:28:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:28.529732 | orchestrator | 2025-05-31 16:28:28 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:28.531106 | orchestrator | 2025-05-31 16:28:28 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state STARTED 2025-05-31 16:28:28.532644 | orchestrator | 2025-05-31 16:28:28 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:28.532728 | orchestrator | 2025-05-31 16:28:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:31.594495 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:31.595522 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:31.595800 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state STARTED 2025-05-31 16:28:31.596778 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:31.598360 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task 4771e799-a528-4d72-a7e9-8e8e95288e8f is in state SUCCESS 2025-05-31 16:28:31.600053 | orchestrator | 2025-05-31 16:28:31.600092 | orchestrator | 2025-05-31 16:28:31.600105 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:28:31.600116 | orchestrator | 2025-05-31 16:28:31.600127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:28:31.600139 | orchestrator | Saturday 31 May 2025 16:25:56 +0000 (0:00:00.290) 0:00:00.290 ********** 2025-05-31 16:28:31.600150 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.600692 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:28:31.600714 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:28:31.600725 | orchestrator | 2025-05-31 16:28:31.600736 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:28:31.600748 | orchestrator | Saturday 31 May 2025 16:25:56 +0000 (0:00:00.364) 0:00:00.655 
********** 2025-05-31 16:28:31.600759 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-31 16:28:31.600770 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-31 16:28:31.600781 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-31 16:28:31.600791 | orchestrator | 2025-05-31 16:28:31.600802 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-31 16:28:31.600813 | orchestrator | 2025-05-31 16:28:31.600851 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 16:28:31.600864 | orchestrator | Saturday 31 May 2025 16:25:57 +0000 (0:00:00.301) 0:00:00.957 ********** 2025-05-31 16:28:31.600875 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:28:31.600887 | orchestrator | 2025-05-31 16:28:31.600897 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-31 16:28:31.600908 | orchestrator | Saturday 31 May 2025 16:25:57 +0000 (0:00:00.725) 0:00:01.682 ********** 2025-05-31 16:28:31.600926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.600944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.601070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.601090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601151 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601183 | orchestrator | 2025-05-31 16:28:31.601202 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-31 16:28:31.601230 | orchestrator | Saturday 31 May 2025 16:26:00 +0000 (0:00:02.368) 0:00:04.051 ********** 2025-05-31 16:28:31.601249 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-31 16:28:31.601267 | orchestrator | 2025-05-31 16:28:31.601287 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-31 16:28:31.601304 | orchestrator | Saturday 31 May 2025 16:26:00 +0000 (0:00:00.516) 0:00:04.567 ********** 2025-05-31 16:28:31.601323 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.601344 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:28:31.601363 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:28:31.601383 | orchestrator | 2025-05-31 16:28:31.601397 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-31 16:28:31.601408 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.473) 0:00:05.041 ********** 2025-05-31 16:28:31.601421 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:28:31.601434 | orchestrator | 2025-05-31 16:28:31.601446 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 16:28:31.601458 | orchestrator | Saturday 31 May 2025 16:26:01 +0000 (0:00:00.352) 0:00:05.393 ********** 2025-05-31 16:28:31.601470 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:28:31.601482 | orchestrator | 2025-05-31 16:28:31.601494 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-31 16:28:31.601506 | orchestrator | Saturday 31 May 2025 16:26:02 +0000 (0:00:00.634) 0:00:06.028 ********** 2025-05-31 16:28:31.601519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.601556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.601588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.601603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.601775 | orchestrator | 2025-05-31 16:28:31.601786 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-31 16:28:31.601797 | 
orchestrator | Saturday 31 May 2025 16:26:05 +0000 (0:00:03.381) 0:00:09.410 ********** 2025-05-31 16:28:31.601818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:28:31.601884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.601949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:28:31.601962 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.601974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:28:31.601992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.602073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:28:31.602107 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.602128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:28:31.602160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.602180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:28:31.602198 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.602217 | orchestrator | 2025-05-31 16:28:31.602236 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-31 16:28:31.602254 | orchestrator | Saturday 31 May 2025 16:26:06 +0000 (0:00:00.854) 0:00:10.264 ********** 2025-05-31 16:28:31.602278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:28:31.602304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.602316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:28:31.602334 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.602349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:28:31.602362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.602379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:28:31.602391 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.602413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-31 16:28:31.602556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.602573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 16:28:31.602586 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.602598 | orchestrator | 2025-05-31 16:28:31.602610 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-31 16:28:31.602622 | orchestrator | Saturday 31 May 2025 16:26:07 +0000 (0:00:01.120) 0:00:11.385 ********** 2025-05-31 16:28:31.602698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.602722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.602744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.602765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.602776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.602788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.602886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 
16:28:31.602903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.602923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.602942 | orchestrator | 2025-05-31 16:28:31.602953 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-31 16:28:31.602964 | orchestrator | Saturday 31 May 2025 16:26:10 +0000 (0:00:03.414) 0:00:14.800 ********** 2025-05-31 16:28:31.602976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.602986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.603001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.603012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.603035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.603055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.603068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.603078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.603092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.603102 | orchestrator | 2025-05-31 16:28:31.603112 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-31 16:28:31.603122 | orchestrator | Saturday 31 May 2025 16:26:18 +0000 (0:00:07.463) 0:00:22.263 ********** 2025-05-31 16:28:31.603138 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.603148 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:28:31.603158 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:28:31.603167 | orchestrator | 2025-05-31 16:28:31.603177 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-31 16:28:31.603186 | orchestrator | Saturday 31 May 2025 16:26:20 +0000 (0:00:02.223) 0:00:24.486 ********** 2025-05-31 16:28:31.603196 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.603205 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.603215 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.603224 | orchestrator | 2025-05-31 16:28:31.603240 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-31 16:28:31.603250 | orchestrator | Saturday 31 May 2025 16:26:21 +0000 (0:00:00.995) 0:00:25.482 ********** 2025-05-31 16:28:31.603259 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.603269 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.603278 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.603287 | orchestrator | 2025-05-31 16:28:31.603297 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-31 16:28:31.603306 | orchestrator | Saturday 31 May 2025 16:26:22 +0000 (0:00:00.567) 0:00:26.050 ********** 2025-05-31 16:28:31.603316 | orchestrator | skipping: 
[testbed-node-0] 2025-05-31 16:28:31.603325 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.603335 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.603344 | orchestrator | 2025-05-31 16:28:31.603354 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-31 16:28:31.603363 | orchestrator | Saturday 31 May 2025 16:26:22 +0000 (0:00:00.329) 0:00:26.379 ********** 2025-05-31 16:28:31.603374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.603384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.603399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.603416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.603433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.603444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 16:28:31.603454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.603464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.603501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.603513 | orchestrator | 2025-05-31 16:28:31.603522 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 16:28:31.603532 | orchestrator | Saturday 31 May 2025 16:26:24 +0000 (0:00:02.221) 0:00:28.600 ********** 2025-05-31 16:28:31.603542 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.603551 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.603561 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.603570 | orchestrator | 2025-05-31 16:28:31.603580 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-31 16:28:31.603590 | orchestrator | Saturday 31 May 2025 16:26:25 +0000 (0:00:00.365) 0:00:28.966 ********** 2025-05-31 16:28:31.603599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-31 16:28:31.603609 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-31 16:28:31.603625 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-31 16:28:31.603635 | orchestrator | 2025-05-31 16:28:31.603644 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-31 16:28:31.603654 | orchestrator | Saturday 31 May 2025 16:26:26 +0000 (0:00:01.760) 0:00:30.726 ********** 2025-05-31 16:28:31.603663 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:28:31.603673 | orchestrator | 2025-05-31 16:28:31.603682 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-31 16:28:31.603692 | orchestrator | Saturday 31 May 2025 16:26:27 +0000 (0:00:00.572) 0:00:31.298 ********** 2025-05-31 16:28:31.603701 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.603711 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.603720 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.603730 | orchestrator | 2025-05-31 16:28:31.603739 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-31 16:28:31.603749 | orchestrator | Saturday 31 May 2025 16:26:28 +0000 (0:00:00.965) 0:00:32.263 ********** 2025-05-31 16:28:31.603758 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:28:31.603768 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-31 16:28:31.603777 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-31 16:28:31.603787 | orchestrator | 2025-05-31 16:28:31.603796 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-31 16:28:31.603806 | orchestrator | Saturday 31 May 2025 16:26:29 +0000 (0:00:01.035) 0:00:33.299 ********** 2025-05-31 16:28:31.603815 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.603843 | orchestrator | ok: [testbed-node-1] 
2025-05-31 16:28:31.603853 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:28:31.603863 | orchestrator | 2025-05-31 16:28:31.603873 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-31 16:28:31.603882 | orchestrator | Saturday 31 May 2025 16:26:29 +0000 (0:00:00.263) 0:00:33.562 ********** 2025-05-31 16:28:31.603892 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-31 16:28:31.603901 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-31 16:28:31.603911 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-31 16:28:31.603927 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-31 16:28:31.603937 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-31 16:28:31.603947 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-31 16:28:31.603956 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-31 16:28:31.603966 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-31 16:28:31.603976 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-31 16:28:31.603985 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-31 16:28:31.603995 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-31 16:28:31.604004 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-31 16:28:31.604014 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-31 16:28:31.604023 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-31 16:28:31.604033 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-31 16:28:31.604042 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 16:28:31.604052 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 16:28:31.604061 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 16:28:31.604071 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 16:28:31.604099 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 16:28:31.604109 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 16:28:31.604119 | orchestrator | 2025-05-31 16:28:31.604128 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-31 16:28:31.604138 | orchestrator | Saturday 31 May 2025 16:26:40 +0000 (0:00:10.658) 0:00:44.220 ********** 2025-05-31 16:28:31.604220 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2025-05-31 16:28:31.604231 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 16:28:31.604240 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 16:28:31.604250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 16:28:31.604259 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 16:28:31.604276 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 16:28:31.604286 | orchestrator | 2025-05-31 16:28:31.604295 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-31 16:28:31.604304 | orchestrator | Saturday 31 May 2025 16:26:43 +0000 (0:00:03.040) 0:00:47.261 ********** 2025-05-31 16:28:31.604315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.604337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.604354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-31 16:28:31.604365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.604383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.604394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 16:28:31.604410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.604420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.604430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 16:28:31.604440 | orchestrator | 2025-05-31 16:28:31.604449 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 16:28:31.604459 | orchestrator | Saturday 31 May 2025 16:26:46 +0000 (0:00:02.942) 0:00:50.203 ********** 2025-05-31 16:28:31.604468 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.604478 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.604487 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.604497 | orchestrator | 2025-05-31 16:28:31.604511 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-31 16:28:31.604520 | orchestrator | Saturday 31 May 2025 16:26:46 +0000 (0:00:00.263) 0:00:50.467 ********** 2025-05-31 16:28:31.604530 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.604539 | orchestrator | 2025-05-31 16:28:31.604549 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-31 16:28:31.604558 | orchestrator | Saturday 31 May 2025 16:26:49 +0000 (0:00:02.592) 0:00:53.060 ********** 2025-05-31 16:28:31.604567 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.604577 | orchestrator | 2025-05-31 16:28:31.604586 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-31 16:28:31.604595 | orchestrator | Saturday 31 May 2025 16:26:51 +0000 (0:00:02.314) 0:00:55.374 ********** 2025-05-31 16:28:31.604605 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:28:31.604617 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.604635 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:28:31.604645 | orchestrator | 2025-05-31 16:28:31.604660 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-31 16:28:31.604670 | orchestrator | Saturday 31 May 2025 16:26:52 +0000 (0:00:00.898) 0:00:56.273 ********** 2025-05-31 16:28:31.604680 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.604694 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:28:31.604705 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:28:31.604714 | orchestrator | 2025-05-31 16:28:31.604724 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-31 16:28:31.604733 | orchestrator | Saturday 31 May 2025 16:26:52 +0000 (0:00:00.363) 0:00:56.637 ********** 2025-05-31 16:28:31.604742 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.604752 | 
orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.604762 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.604771 | orchestrator | 2025-05-31 16:28:31.604781 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-31 16:28:31.604790 | orchestrator | Saturday 31 May 2025 16:26:53 +0000 (0:00:00.523) 0:00:57.161 ********** 2025-05-31 16:28:31.604799 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.604809 | orchestrator | 2025-05-31 16:28:31.604818 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-31 16:28:31.604845 | orchestrator | Saturday 31 May 2025 16:27:06 +0000 (0:00:12.901) 0:01:10.063 ********** 2025-05-31 16:28:31.604854 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.604864 | orchestrator | 2025-05-31 16:28:31.604873 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-31 16:28:31.604883 | orchestrator | Saturday 31 May 2025 16:27:15 +0000 (0:00:09.544) 0:01:19.608 ********** 2025-05-31 16:28:31.604892 | orchestrator | 2025-05-31 16:28:31.604902 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-31 16:28:31.604911 | orchestrator | Saturday 31 May 2025 16:27:15 +0000 (0:00:00.053) 0:01:19.661 ********** 2025-05-31 16:28:31.604921 | orchestrator | 2025-05-31 16:28:31.604931 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-31 16:28:31.604940 | orchestrator | Saturday 31 May 2025 16:27:15 +0000 (0:00:00.052) 0:01:19.713 ********** 2025-05-31 16:28:31.604950 | orchestrator | 2025-05-31 16:28:31.604959 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-31 16:28:31.604968 | orchestrator | Saturday 31 May 2025 16:27:15 +0000 (0:00:00.056) 0:01:19.770 ********** 2025-05-31 16:28:31.604978 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.604987 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:28:31.604997 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:28:31.605006 | orchestrator | 2025-05-31 16:28:31.605016 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-31 16:28:31.605025 | orchestrator | Saturday 31 May 2025 16:27:30 +0000 (0:00:14.160) 0:01:33.931 ********** 2025-05-31 16:28:31.605035 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.605044 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:28:31.605054 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:28:31.605063 | orchestrator | 2025-05-31 16:28:31.605072 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-31 16:28:31.605082 | orchestrator | Saturday 31 May 2025 16:27:39 +0000 (0:00:09.865) 0:01:43.796 ********** 2025-05-31 16:28:31.605091 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.605101 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:28:31.605110 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:28:31.605120 | orchestrator | 2025-05-31 16:28:31.605129 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 16:28:31.605139 | orchestrator | Saturday 31 May 2025 16:27:45 +0000 (0:00:05.319) 0:01:49.116 ********** 2025-05-31 16:28:31.605148 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:28:31.605158 | orchestrator | 2025-05-31 16:28:31.605167 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-31 16:28:31.605183 | orchestrator | Saturday 31 May 2025 16:27:45 +0000 (0:00:00.764) 0:01:49.880 ********** 2025-05-31 16:28:31.605192 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.605202 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:28:31.605212 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:28:31.605221 | orchestrator | 2025-05-31 16:28:31.605231 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-31 16:28:31.605240 | orchestrator | Saturday 31 May 2025 16:27:46 +0000 (0:00:00.987) 0:01:50.868 ********** 2025-05-31 16:28:31.605250 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:28:31.605259 | orchestrator | 2025-05-31 16:28:31.605269 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-31 16:28:31.605278 | orchestrator | Saturday 31 May 2025 16:27:48 +0000 (0:00:01.489) 0:01:52.357 ********** 2025-05-31 16:28:31.605288 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-31 16:28:31.605298 | orchestrator | 2025-05-31 16:28:31.605307 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-31 16:28:31.605316 | orchestrator | Saturday 31 May 2025 16:27:58 +0000 (0:00:09.789) 0:02:02.147 ********** 2025-05-31 16:28:31.605326 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-31 16:28:31.605335 | orchestrator | 2025-05-31 16:28:31.605349 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-31 16:28:31.605359 | orchestrator | Saturday 31 May 2025 16:28:17 +0000 (0:00:19.493) 0:02:21.640 ********** 2025-05-31 16:28:31.605368 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-31 16:28:31.605378 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-31 16:28:31.605387 | orchestrator | 2025-05-31 16:28:31.605396 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-31 16:28:31.605406 | orchestrator | Saturday 31 May 2025 16:28:25 +0000 (0:00:07.457) 0:02:29.098 ********** 2025-05-31 16:28:31.605415 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.605425 | orchestrator | 2025-05-31 16:28:31.605434 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-31 16:28:31.605443 | orchestrator | Saturday 31 May 2025 16:28:25 +0000 (0:00:00.126) 0:02:29.225 ********** 2025-05-31 16:28:31.605453 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.605462 | orchestrator | 2025-05-31 16:28:31.605472 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-31 16:28:31.605486 | orchestrator | Saturday 31 May 2025 16:28:25 +0000 (0:00:00.114) 0:02:29.339 ********** 2025-05-31 16:28:31.605496 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.605506 | orchestrator | 2025-05-31 16:28:31.605515 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-31 16:28:31.605525 | orchestrator | Saturday 31 
May 2025 16:28:25 +0000 (0:00:00.144) 0:02:29.484 ********** 2025-05-31 16:28:31.605534 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.605543 | orchestrator | 2025-05-31 16:28:31.605553 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-31 16:28:31.605562 | orchestrator | Saturday 31 May 2025 16:28:26 +0000 (0:00:00.408) 0:02:29.893 ********** 2025-05-31 16:28:31.605572 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:31.605581 | orchestrator | 2025-05-31 16:28:31.605591 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 16:28:31.605600 | orchestrator | Saturday 31 May 2025 16:28:29 +0000 (0:00:03.557) 0:02:33.450 ********** 2025-05-31 16:28:31.605610 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:31.605619 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:28:31.605629 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:28:31.605638 | orchestrator | 2025-05-31 16:28:31.605648 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:28:31.605658 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-31 16:28:31.605689 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-31 16:28:31.605700 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-31 16:28:31.605709 | orchestrator | 2025-05-31 16:28:31.605719 | orchestrator | 2025-05-31 16:28:31.605728 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:28:31.605738 | orchestrator | Saturday 31 May 2025 16:28:30 +0000 (0:00:00.492) 0:02:33.942 ********** 2025-05-31 16:28:31.605748 | orchestrator | =============================================================================== 2025-05-31 16:28:31.605758 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.49s 2025-05-31 16:28:31.605767 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.16s 2025-05-31 16:28:31.605777 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.90s 2025-05-31 16:28:31.605786 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.66s 2025-05-31 16:28:31.605795 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.87s 2025-05-31 16:28:31.605805 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.79s 2025-05-31 16:28:31.605814 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.55s 2025-05-31 16:28:31.605840 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.46s 2025-05-31 16:28:31.605850 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.46s 2025-05-31 16:28:31.605860 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.32s 2025-05-31 16:28:31.605869 | orchestrator | keystone : Creating default user role ----------------------------------- 3.56s 2025-05-31 16:28:31.605879 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.41s 2025-05-31 16:28:31.605888 | orchestrator | 
service-cert-copy : keystone | Copying over extra CA certificates ------- 3.38s 2025-05-31 16:28:31.605898 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.04s 2025-05-31 16:28:31.605907 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.94s 2025-05-31 16:28:31.605917 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.59s 2025-05-31 16:28:31.605926 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.37s 2025-05-31 16:28:31.605936 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.31s 2025-05-31 16:28:31.605945 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.22s 2025-05-31 16:28:31.605954 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.22s 2025-05-31 16:28:31.605969 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:31.605978 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:31.605988 | orchestrator | 2025-05-31 16:28:31 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:31.605998 | orchestrator | 2025-05-31 16:28:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:34.644977 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:34.645209 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:34.646057 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task 7ff72eab-42ef-45c5-8385-1439e30388a8 is in state SUCCESS 2025-05-31 16:28:34.647229 | orchestrator | 2025-05-31 16:28:34.647263 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-31 16:28:34.647275 | orchestrator | 2025-05-31 16:28:34.647287 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-31 16:28:34.647298 | orchestrator | 2025-05-31 16:28:34.647309 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-31 16:28:34.647320 | orchestrator | Saturday 31 May 2025 16:28:06 +0000 (0:00:00.440) 0:00:00.440 ********** 2025-05-31 16:28:34.647330 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-31 16:28:34.647341 | orchestrator | 2025-05-31 16:28:34.647352 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-31 16:28:34.647362 | orchestrator | Saturday 31 May 2025 16:28:06 +0000 (0:00:00.192) 0:00:00.633 ********** 2025-05-31 16:28:34.647374 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:28:34.647385 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-31 16:28:34.647395 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-31 16:28:34.647406 | orchestrator | 2025-05-31 16:28:34.647417 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-31 16:28:34.647427 | orchestrator | Saturday 31 May 2025 16:28:07 +0000 (0:00:00.800) 0:00:01.433 ********** 2025-05-31 16:28:34.647438 | orchestrator | included: 
/ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-31 16:28:34.647448 | orchestrator | 2025-05-31 16:28:34.647459 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-31 16:28:34.647469 | orchestrator | Saturday 31 May 2025 16:28:07 +0000 (0:00:00.217) 0:00:01.650 ********** 2025-05-31 16:28:34.647480 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647491 | orchestrator | 2025-05-31 16:28:34.647502 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-31 16:28:34.647512 | orchestrator | Saturday 31 May 2025 16:28:08 +0000 (0:00:00.571) 0:00:02.222 ********** 2025-05-31 16:28:34.647522 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647533 | orchestrator | 2025-05-31 16:28:34.647543 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-31 16:28:34.647554 | orchestrator | Saturday 31 May 2025 16:28:08 +0000 (0:00:00.119) 0:00:02.341 ********** 2025-05-31 16:28:34.647564 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647575 | orchestrator | 2025-05-31 16:28:34.647586 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-31 16:28:34.647596 | orchestrator | Saturday 31 May 2025 16:28:08 +0000 (0:00:00.457) 0:00:02.799 ********** 2025-05-31 16:28:34.647607 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647617 | orchestrator | 2025-05-31 16:28:34.647628 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-31 16:28:34.647638 | orchestrator | Saturday 31 May 2025 16:28:08 +0000 (0:00:00.136) 0:00:02.935 ********** 2025-05-31 16:28:34.647649 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647659 | orchestrator | 2025-05-31 16:28:34.647670 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-31 16:28:34.647680 | orchestrator | Saturday 31 May 2025 16:28:08 +0000 (0:00:00.129) 0:00:03.065 ********** 2025-05-31 16:28:34.647691 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647701 | orchestrator | 2025-05-31 16:28:34.647712 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-31 16:28:34.647722 | orchestrator | Saturday 31 May 2025 16:28:09 +0000 (0:00:00.128) 0:00:03.194 ********** 2025-05-31 16:28:34.647733 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.647744 | orchestrator | 2025-05-31 16:28:34.647755 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-31 16:28:34.647765 | orchestrator | Saturday 31 May 2025 16:28:09 +0000 (0:00:00.127) 0:00:03.321 ********** 2025-05-31 16:28:34.647776 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647786 | orchestrator | 2025-05-31 16:28:34.647797 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-31 16:28:34.647819 | orchestrator | Saturday 31 May 2025 16:28:09 +0000 (0:00:00.267) 0:00:03.589 ********** 2025-05-31 16:28:34.647869 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:28:34.647891 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:28:34.647911 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:28:34.647924 | orchestrator | 
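The ceph-facts tasks above probe the host for a podman binary before settling on the container_binary and ceph_cmd facts. A minimal Python sketch of that selection logic follows; it is not the ceph-ansible implementation, and the "prefer podman, fall back to docker" order is assumed from the task names in this log.

import shutil

def detect_container_binary() -> str:
    """Return 'podman' if the binary is on PATH, otherwise 'docker'.

    Mirrors (approximately) the 'check if podman binary is present' /
    'set_fact container_binary' steps seen in the log above.
    """
    return "podman" if shutil.which("podman") else "docker"

if __name__ == "__main__":
    binary = detect_container_binary()
    # The role stores a value like this as the 'container_binary' fact;
    # later facts such as 'container_exec_cmd' presumably prefix commands
    # with it when exec'ing into the ceph-mon containers.
    print(f"container_binary={binary}")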
2025-05-31 16:28:34.647936 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-31 16:28:34.647948 | orchestrator | Saturday 31 May 2025 16:28:10 +0000 (0:00:00.635) 0:00:04.225 ********** 2025-05-31 16:28:34.647960 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.647971 | orchestrator | 2025-05-31 16:28:34.647983 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-31 16:28:34.647995 | orchestrator | Saturday 31 May 2025 16:28:10 +0000 (0:00:00.277) 0:00:04.503 ********** 2025-05-31 16:28:34.648019 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:28:34.648032 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:28:34.648044 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:28:34.648056 | orchestrator | 2025-05-31 16:28:34.648068 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-31 16:28:34.648080 | orchestrator | Saturday 31 May 2025 16:28:12 +0000 (0:00:01.907) 0:00:06.411 ********** 2025-05-31 16:28:34.648092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:28:34.648104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:28:34.648118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:28:34.648137 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648156 | orchestrator | 2025-05-31 16:28:34.648284 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-31 16:28:34.648314 | orchestrator | Saturday 31 May 2025 16:28:12 +0000 (0:00:00.418) 0:00:06.829 ********** 2025-05-31 16:28:34.648326 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-31 16:28:34.648340 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-31 16:28:34.648351 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-31 16:28:34.648362 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648372 | orchestrator | 2025-05-31 16:28:34.648383 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-31 16:28:34.648393 | orchestrator | Saturday 31 May 2025 16:28:13 +0000 (0:00:00.814) 0:00:07.644 ********** 2025-05-31 16:28:34.648406 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:28:34.648419 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:28:34.648441 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 16:28:34.648452 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648463 | orchestrator | 2025-05-31 16:28:34.648474 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-31 16:28:34.648484 | orchestrator | Saturday 31 May 2025 16:28:13 +0000 (0:00:00.172) 0:00:07.817 ********** 2025-05-31 16:28:34.648496 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '3fa7fc57c0c6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-31 16:28:11.074538', 'end': '2025-05-31 16:28:11.118171', 'delta': '0:00:00.043633', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3fa7fc57c0c6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-31 16:28:34.648516 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '6c7f55223df5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-31 16:28:11.609406', 'end': '2025-05-31 16:28:11.659719', 'delta': '0:00:00.050313', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6c7f55223df5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-31 16:28:34.648536 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '427a0ac582ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-31 16:28:12.158333', 'end': '2025-05-31 16:28:12.195573', 'delta': '0:00:00.037240', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['427a0ac582ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-31 16:28:34.648548 | 
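The "find a running mon container" task above runs docker ps -q --filter name=ceph-mon-<hostname> against each monitor host and records the returned container IDs, which the following set_fact then uses as running_mon. A small stand-alone Python sketch of the same probe is shown below; the hostnames are hard-coded purely for illustration and the helper name is not part of ceph-ansible.

import subprocess

def find_running_mon(hostname: str) -> str | None:
    """Return the container ID of ceph-mon-<hostname> if one is running."""
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True,
        text=True,
        check=False,
    )
    container_id = result.stdout.strip()
    # An empty stdout means no matching container is running on this host.
    return container_id or None

if __name__ == "__main__":
    for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
        print(node, find_running_mon(node))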
orchestrator | 2025-05-31 16:28:34.648559 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-31 16:28:34.648569 | orchestrator | Saturday 31 May 2025 16:28:13 +0000 (0:00:00.196) 0:00:08.013 ********** 2025-05-31 16:28:34.648580 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.648590 | orchestrator | 2025-05-31 16:28:34.648601 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-31 16:28:34.648611 | orchestrator | Saturday 31 May 2025 16:28:14 +0000 (0:00:00.261) 0:00:08.274 ********** 2025-05-31 16:28:34.648621 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-31 16:28:34.648632 | orchestrator | 2025-05-31 16:28:34.648643 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-31 16:28:34.648653 | orchestrator | Saturday 31 May 2025 16:28:15 +0000 (0:00:01.543) 0:00:09.817 ********** 2025-05-31 16:28:34.648714 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648726 | orchestrator | 2025-05-31 16:28:34.648737 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-31 16:28:34.648747 | orchestrator | Saturday 31 May 2025 16:28:15 +0000 (0:00:00.132) 0:00:09.949 ********** 2025-05-31 16:28:34.648758 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648769 | orchestrator | 2025-05-31 16:28:34.648779 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-31 16:28:34.648790 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.226) 0:00:10.176 ********** 2025-05-31 16:28:34.648801 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648811 | orchestrator | 2025-05-31 16:28:34.648878 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-31 16:28:34.648894 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.114) 0:00:10.291 ********** 2025-05-31 16:28:34.648904 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.648915 | orchestrator | 2025-05-31 16:28:34.648925 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-31 16:28:34.648936 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.131) 0:00:10.423 ********** 2025-05-31 16:28:34.648946 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648957 | orchestrator | 2025-05-31 16:28:34.648967 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-31 16:28:34.648978 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.214) 0:00:10.638 ********** 2025-05-31 16:28:34.648988 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.648998 | orchestrator | 2025-05-31 16:28:34.649009 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-31 16:28:34.649019 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.127) 0:00:10.765 ********** 2025-05-31 16:28:34.649030 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649040 | orchestrator | 2025-05-31 16:28:34.649050 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-31 16:28:34.649061 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.126) 0:00:10.892 ********** 2025-05-31 16:28:34.649071 | orchestrator | skipping: 
[testbed-node-0] 2025-05-31 16:28:34.649082 | orchestrator | 2025-05-31 16:28:34.649092 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-31 16:28:34.649103 | orchestrator | Saturday 31 May 2025 16:28:16 +0000 (0:00:00.111) 0:00:11.003 ********** 2025-05-31 16:28:34.649113 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649123 | orchestrator | 2025-05-31 16:28:34.649134 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-31 16:28:34.649144 | orchestrator | Saturday 31 May 2025 16:28:17 +0000 (0:00:00.130) 0:00:11.133 ********** 2025-05-31 16:28:34.649155 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649165 | orchestrator | 2025-05-31 16:28:34.649176 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-31 16:28:34.649186 | orchestrator | Saturday 31 May 2025 16:28:17 +0000 (0:00:00.126) 0:00:11.259 ********** 2025-05-31 16:28:34.649197 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649207 | orchestrator | 2025-05-31 16:28:34.649218 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-31 16:28:34.649228 | orchestrator | Saturday 31 May 2025 16:28:17 +0000 (0:00:00.316) 0:00:11.576 ********** 2025-05-31 16:28:34.649244 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649255 | orchestrator | 2025-05-31 16:28:34.649265 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-31 16:28:34.649276 | orchestrator | Saturday 31 May 2025 16:28:17 +0000 (0:00:00.122) 0:00:11.699 ********** 2025-05-31 16:28:34.649287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-31 16:28:34.649348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 16:28:34.649408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4907d85-21a9-4777-b71a-1559c505de70-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:28:34.649428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-15-28-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 16:28:34.649440 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649451 | orchestrator | 2025-05-31 16:28:34.649462 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-31 16:28:34.649473 | orchestrator | Saturday 31 May 2025 16:28:17 +0000 (0:00:00.234) 0:00:11.933 ********** 2025-05-31 16:28:34.649483 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649494 | orchestrator | 2025-05-31 16:28:34.649504 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-31 16:28:34.649515 | orchestrator | Saturday 31 May 2025 16:28:18 +0000 (0:00:00.234) 0:00:12.167 ********** 2025-05-31 16:28:34.649525 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649536 | orchestrator | 2025-05-31 16:28:34.649546 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-31 16:28:34.649557 | 
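[editor's note] The items skipped under "set_fact devices generate device list when osd_auto_discovery" above iterate over the host's ansible device facts: the loop0..loop7 virtual devices, the already partitioned root disk sda and the removable sr0 DVD drive. With osd_auto_discovery enabled, only unpartitioned, non-removable, non-virtual disks would be kept as OSD candidates. A rough sketch of that kind of filter over a devices dict, assuming the same fact structure as in the log (an approximation, not the actual ceph-ansible logic):

def candidate_osd_devices(devices):
    """Filter ansible_facts['devices'] down to plausible OSD disks.

    Keeps whole disks that are not removable, have no partitions and are not
    loop or optical devices -- an approximation of osd_auto_discovery.
    """
    keep = []
    for name, info in devices.items():
        if name.startswith(("loop", "sr", "ram")):
            continue                      # virtual or optical devices
        if info.get("removable") == "1":
            continue                      # USB sticks, DVD drives
        if info.get("partitions"):
            continue                      # already partitioned (e.g. the root disk)
        keep.append(f"/dev/{name}")
    return keep

# With the facts visible in the log -- sda is partitioned, sr0 is removable,
# loop devices are virtual -- nothing would be auto-discovered on this node.
sample = {
    "loop0": {"removable": "0", "partitions": {}},
    "sda": {"removable": "0", "partitions": {"sda1": {}}},
    "sr0": {"removable": "1", "partitions": {}},
}
print(candidate_osd_devices(sample))      # -> []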
orchestrator | Saturday 31 May 2025 16:28:18 +0000 (0:00:00.126) 0:00:12.293 ********** 2025-05-31 16:28:34.649568 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649578 | orchestrator | 2025-05-31 16:28:34.649589 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-31 16:28:34.649599 | orchestrator | Saturday 31 May 2025 16:28:18 +0000 (0:00:00.141) 0:00:12.435 ********** 2025-05-31 16:28:34.649610 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.649620 | orchestrator | 2025-05-31 16:28:34.649631 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-31 16:28:34.649641 | orchestrator | Saturday 31 May 2025 16:28:18 +0000 (0:00:00.498) 0:00:12.934 ********** 2025-05-31 16:28:34.649652 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.649662 | orchestrator | 2025-05-31 16:28:34.649673 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-31 16:28:34.649683 | orchestrator | Saturday 31 May 2025 16:28:18 +0000 (0:00:00.138) 0:00:13.072 ********** 2025-05-31 16:28:34.649694 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.649704 | orchestrator | 2025-05-31 16:28:34.649715 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-31 16:28:34.649732 | orchestrator | Saturday 31 May 2025 16:28:19 +0000 (0:00:00.455) 0:00:13.528 ********** 2025-05-31 16:28:34.649743 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.649753 | orchestrator | 2025-05-31 16:28:34.649764 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-31 16:28:34.649774 | orchestrator | Saturday 31 May 2025 16:28:19 +0000 (0:00:00.155) 0:00:13.683 ********** 2025-05-31 16:28:34.649785 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649795 | orchestrator | 2025-05-31 16:28:34.649806 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-31 16:28:34.649816 | orchestrator | Saturday 31 May 2025 16:28:20 +0000 (0:00:00.576) 0:00:14.260 ********** 2025-05-31 16:28:34.649847 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649859 | orchestrator | 2025-05-31 16:28:34.649869 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-31 16:28:34.649880 | orchestrator | Saturday 31 May 2025 16:28:20 +0000 (0:00:00.153) 0:00:14.413 ********** 2025-05-31 16:28:34.649890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:28:34.649901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:28:34.649912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:28:34.649922 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.649933 | orchestrator | 2025-05-31 16:28:34.649943 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-31 16:28:34.649954 | orchestrator | Saturday 31 May 2025 16:28:20 +0000 (0:00:00.475) 0:00:14.889 ********** 2025-05-31 16:28:34.649964 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:28:34.649975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:28:34.649985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:28:34.649996 | orchestrator | 
skipping: [testbed-node-0] 2025-05-31 16:28:34.650006 | orchestrator | 2025-05-31 16:28:34.650065 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-31 16:28:34.650079 | orchestrator | Saturday 31 May 2025 16:28:21 +0000 (0:00:00.449) 0:00:15.339 ********** 2025-05-31 16:28:34.650090 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:28:34.650101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-31 16:28:34.650111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-31 16:28:34.650122 | orchestrator | 2025-05-31 16:28:34.650133 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-31 16:28:34.650143 | orchestrator | Saturday 31 May 2025 16:28:22 +0000 (0:00:01.094) 0:00:16.433 ********** 2025-05-31 16:28:34.650154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:28:34.650164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:28:34.650175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:28:34.650186 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.650197 | orchestrator | 2025-05-31 16:28:34.650207 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-31 16:28:34.650218 | orchestrator | Saturday 31 May 2025 16:28:22 +0000 (0:00:00.184) 0:00:16.618 ********** 2025-05-31 16:28:34.650229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 16:28:34.650239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 16:28:34.650250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 16:28:34.650261 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.650271 | orchestrator | 2025-05-31 16:28:34.650282 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-31 16:28:34.650293 | orchestrator | Saturday 31 May 2025 16:28:22 +0000 (0:00:00.226) 0:00:16.844 ********** 2025-05-31 16:28:34.650304 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-31 16:28:34.650321 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-31 16:28:34.650333 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-31 16:28:34.650344 | orchestrator | 2025-05-31 16:28:34.650354 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-31 16:28:34.650365 | orchestrator | Saturday 31 May 2025 16:28:22 +0000 (0:00:00.207) 0:00:17.051 ********** 2025-05-31 16:28:34.650375 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.650386 | orchestrator | 2025-05-31 16:28:34.650396 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-31 16:28:34.650407 | orchestrator | Saturday 31 May 2025 16:28:23 +0000 (0:00:00.123) 0:00:17.175 ********** 2025-05-31 16:28:34.650417 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:28:34.650428 | orchestrator | 2025-05-31 16:28:34.650538 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-31 16:28:34.650561 | orchestrator | Saturday 31 May 2025 16:28:23 +0000 (0:00:00.116) 
0:00:17.291 ********** 2025-05-31 16:28:34.650571 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:28:34.650582 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:28:34.650593 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:28:34.650603 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-31 16:28:34.650614 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-31 16:28:34.650624 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-31 16:28:34.650635 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-31 16:28:34.650645 | orchestrator | 2025-05-31 16:28:34.650656 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-31 16:28:34.650666 | orchestrator | Saturday 31 May 2025 16:28:24 +0000 (0:00:01.138) 0:00:18.430 ********** 2025-05-31 16:28:34.650677 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 16:28:34.650687 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 16:28:34.650698 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 16:28:34.650708 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-31 16:28:34.650719 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-31 16:28:34.650729 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-31 16:28:34.650744 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-31 16:28:34.650755 | orchestrator | 2025-05-31 16:28:34.650766 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-31 16:28:34.650776 | orchestrator | Saturday 31 May 2025 16:28:25 +0000 (0:00:01.437) 0:00:19.868 ********** 2025-05-31 16:28:34.650787 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:28:34.650797 | orchestrator | 2025-05-31 16:28:34.650808 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-31 16:28:34.650818 | orchestrator | Saturday 31 May 2025 16:28:26 +0000 (0:00:00.509) 0:00:20.377 ********** 2025-05-31 16:28:34.650852 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:28:34.650864 | orchestrator | 2025-05-31 16:28:34.650874 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-31 16:28:34.650885 | orchestrator | Saturday 31 May 2025 16:28:26 +0000 (0:00:00.586) 0:00:20.964 ********** 2025-05-31 16:28:34.650906 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-31 16:28:34.650917 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-31 16:28:34.650935 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-31 16:28:34.650946 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-31 16:28:34.650957 | orchestrator | changed: 
[testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-31 16:28:34.650967 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-31 16:28:34.650978 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-31 16:28:34.650988 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-31 16:28:34.650998 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-31 16:28:34.651009 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-31 16:28:34.651019 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-31 16:28:34.651030 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-31 16:28:34.651040 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-31 16:28:34.651051 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-31 16:28:34.651061 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-31 16:28:34.651072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-31 16:28:34.651082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-31 16:28:34.651093 | orchestrator | 2025-05-31 16:28:34.651104 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:28:34.651114 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-31 16:28:34.651125 | orchestrator | 2025-05-31 16:28:34.651136 | orchestrator | 2025-05-31 16:28:34.651147 | orchestrator | 2025-05-31 16:28:34.651158 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:28:34.651168 | orchestrator | Saturday 31 May 2025 16:28:33 +0000 (0:00:06.433) 0:00:27.398 ********** 2025-05-31 16:28:34.651179 | orchestrator | =============================================================================== 2025-05-31 16:28:34.651189 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.43s 2025-05-31 16:28:34.651200 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.91s 2025-05-31 16:28:34.651211 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.54s 2025-05-31 16:28:34.651221 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.44s 2025-05-31 16:28:34.651232 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.14s 2025-05-31 16:28:34.651243 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.09s 2025-05-31 16:28:34.651253 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.82s 2025-05-31 16:28:34.651264 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.80s 2025-05-31 16:28:34.651274 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2025-05-31 16:28:34.651285 | orchestrator | ceph-fetch-keys : create a local fetch directory 
if it does not exist --- 0.59s 2025-05-31 16:28:34.651295 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.58s 2025-05-31 16:28:34.651306 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.57s 2025-05-31 16:28:34.651316 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.51s 2025-05-31 16:28:34.651327 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.50s 2025-05-31 16:28:34.651366 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.48s 2025-05-31 16:28:34.651377 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.46s 2025-05-31 16:28:34.651388 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.46s 2025-05-31 16:28:34.651403 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.45s 2025-05-31 16:28:34.651414 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s 2025-05-31 16:28:34.651425 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.32s 2025-05-31 16:28:34.651435 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:34.672108 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:34.672803 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state STARTED 2025-05-31 16:28:34.676140 | orchestrator | 2025-05-31 16:28:34 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:34.676170 | orchestrator | 2025-05-31 16:28:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:37.710068 | orchestrator | 2025-05-31 16:28:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:37.710476 | orchestrator | 2025-05-31 16:28:37 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:37.711029 | orchestrator | 2025-05-31 16:28:37 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:37.711787 | orchestrator | 2025-05-31 16:28:37 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:37.712307 | orchestrator | 2025-05-31 16:28:37 | INFO  | Task 22e70a91-c6ef-4b6f-ba5f-bc702da9c6f2 is in state SUCCESS 2025-05-31 16:28:37.713122 | orchestrator | 2025-05-31 16:28:37 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:37.713149 | orchestrator | 2025-05-31 16:28:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:40.751319 | orchestrator | 2025-05-31 16:28:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:40.751863 | orchestrator | 2025-05-31 16:28:40 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:40.753379 | orchestrator | 2025-05-31 16:28:40 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:40.754485 | orchestrator | 2025-05-31 16:28:40 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:40.755694 | orchestrator | 2025-05-31 16:28:40 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 
16:28:40.758050 | orchestrator | 2025-05-31 16:28:40 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:40.758075 | orchestrator | 2025-05-31 16:28:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:43.795474 | orchestrator | 2025-05-31 16:28:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:43.798697 | orchestrator | 2025-05-31 16:28:43 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:43.800703 | orchestrator | 2025-05-31 16:28:43 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:43.803261 | orchestrator | 2025-05-31 16:28:43 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:43.804935 | orchestrator | 2025-05-31 16:28:43 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:43.806496 | orchestrator | 2025-05-31 16:28:43 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:43.806526 | orchestrator | 2025-05-31 16:28:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:46.863143 | orchestrator | 2025-05-31 16:28:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:46.865091 | orchestrator | 2025-05-31 16:28:46 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:46.866050 | orchestrator | 2025-05-31 16:28:46 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:46.867658 | orchestrator | 2025-05-31 16:28:46 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:46.872210 | orchestrator | 2025-05-31 16:28:46 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:46.872315 | orchestrator | 2025-05-31 16:28:46 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:46.872326 | orchestrator | 2025-05-31 16:28:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:49.954530 | orchestrator | 2025-05-31 16:28:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:49.954945 | orchestrator | 2025-05-31 16:28:49 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:49.956257 | orchestrator | 2025-05-31 16:28:49 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:49.957459 | orchestrator | 2025-05-31 16:28:49 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:49.958982 | orchestrator | 2025-05-31 16:28:49 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:49.959641 | orchestrator | 2025-05-31 16:28:49 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:49.959663 | orchestrator | 2025-05-31 16:28:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:53.000195 | orchestrator | 2025-05-31 16:28:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:53.001800 | orchestrator | 2025-05-31 16:28:52 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:53.005420 | orchestrator | 2025-05-31 16:28:53 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:53.007424 | orchestrator | 2025-05-31 16:28:53 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in 
state STARTED 2025-05-31 16:28:53.008891 | orchestrator | 2025-05-31 16:28:53 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:53.011215 | orchestrator | 2025-05-31 16:28:53 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:53.011907 | orchestrator | 2025-05-31 16:28:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:56.062699 | orchestrator | 2025-05-31 16:28:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:56.063765 | orchestrator | 2025-05-31 16:28:56 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:56.065113 | orchestrator | 2025-05-31 16:28:56 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:56.066438 | orchestrator | 2025-05-31 16:28:56 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:56.069262 | orchestrator | 2025-05-31 16:28:56 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:56.070768 | orchestrator | 2025-05-31 16:28:56 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:56.070801 | orchestrator | 2025-05-31 16:28:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:28:59.116512 | orchestrator | 2025-05-31 16:28:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:28:59.117022 | orchestrator | 2025-05-31 16:28:59 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:28:59.118435 | orchestrator | 2025-05-31 16:28:59 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:28:59.119696 | orchestrator | 2025-05-31 16:28:59 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:28:59.122092 | orchestrator | 2025-05-31 16:28:59 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:28:59.122125 | orchestrator | 2025-05-31 16:28:59 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:28:59.122138 | orchestrator | 2025-05-31 16:28:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:02.161702 | orchestrator | 2025-05-31 16:29:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:02.163313 | orchestrator | 2025-05-31 16:29:02 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:02.164585 | orchestrator | 2025-05-31 16:29:02 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:02.166063 | orchestrator | 2025-05-31 16:29:02 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:02.167024 | orchestrator | 2025-05-31 16:29:02 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:02.168240 | orchestrator | 2025-05-31 16:29:02 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:02.168263 | orchestrator | 2025-05-31 16:29:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:05.267102 | orchestrator | 2025-05-31 16:29:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:05.269428 | orchestrator | 2025-05-31 16:29:05 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:05.271545 | orchestrator | 2025-05-31 16:29:05 | INFO  | Task 
701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:05.272739 | orchestrator | 2025-05-31 16:29:05 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:05.273769 | orchestrator | 2025-05-31 16:29:05 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:05.275348 | orchestrator | 2025-05-31 16:29:05 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:05.275379 | orchestrator | 2025-05-31 16:29:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:08.341554 | orchestrator | 2025-05-31 16:29:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:08.342148 | orchestrator | 2025-05-31 16:29:08 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:08.343237 | orchestrator | 2025-05-31 16:29:08 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:08.346125 | orchestrator | 2025-05-31 16:29:08 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:08.346167 | orchestrator | 2025-05-31 16:29:08 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:08.350723 | orchestrator | 2025-05-31 16:29:08 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:08.350778 | orchestrator | 2025-05-31 16:29:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:11.405457 | orchestrator | 2025-05-31 16:29:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:11.405558 | orchestrator | 2025-05-31 16:29:11 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:11.405586 | orchestrator | 2025-05-31 16:29:11 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:11.406406 | orchestrator | 2025-05-31 16:29:11 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:11.407326 | orchestrator | 2025-05-31 16:29:11 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:11.408475 | orchestrator | 2025-05-31 16:29:11 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:11.408503 | orchestrator | 2025-05-31 16:29:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:14.459553 | orchestrator | 2025-05-31 16:29:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:14.459660 | orchestrator | 2025-05-31 16:29:14 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:14.467025 | orchestrator | 2025-05-31 16:29:14 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:14.467091 | orchestrator | 2025-05-31 16:29:14 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:14.467105 | orchestrator | 2025-05-31 16:29:14 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:14.467117 | orchestrator | 2025-05-31 16:29:14 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:14.467129 | orchestrator | 2025-05-31 16:29:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:17.502128 | orchestrator | 2025-05-31 16:29:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:17.502406 | orchestrator | 2025-05-31 
16:29:17 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:17.502775 | orchestrator | 2025-05-31 16:29:17 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:17.503449 | orchestrator | 2025-05-31 16:29:17 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:17.504010 | orchestrator | 2025-05-31 16:29:17 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:17.504718 | orchestrator | 2025-05-31 16:29:17 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:17.504760 | orchestrator | 2025-05-31 16:29:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:20.536233 | orchestrator | 2025-05-31 16:29:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:20.536337 | orchestrator | 2025-05-31 16:29:20 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:20.536970 | orchestrator | 2025-05-31 16:29:20 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:20.537499 | orchestrator | 2025-05-31 16:29:20 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:20.538090 | orchestrator | 2025-05-31 16:29:20 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:20.538801 | orchestrator | 2025-05-31 16:29:20 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:20.538856 | orchestrator | 2025-05-31 16:29:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:23.563189 | orchestrator | 2025-05-31 16:29:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:23.565505 | orchestrator | 2025-05-31 16:29:23 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:23.565943 | orchestrator | 2025-05-31 16:29:23 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:23.566476 | orchestrator | 2025-05-31 16:29:23 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:23.568044 | orchestrator | 2025-05-31 16:29:23 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:23.568519 | orchestrator | 2025-05-31 16:29:23 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:23.568542 | orchestrator | 2025-05-31 16:29:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:26.608513 | orchestrator | 2025-05-31 16:29:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:26.608938 | orchestrator | 2025-05-31 16:29:26 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:26.609961 | orchestrator | 2025-05-31 16:29:26 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:26.610462 | orchestrator | 2025-05-31 16:29:26 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:26.611474 | orchestrator | 2025-05-31 16:29:26 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:26.612567 | orchestrator | 2025-05-31 16:29:26 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:26.612597 | orchestrator | 2025-05-31 16:29:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:29.657385 | 
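[editor's note] The repeated INFO lines above come from a polling loop on the manager: every few seconds the state of each submitted task is queried, the loop prints the state, waits one second and checks again until the tasks leave the STARTED state (as with task 22e70a91, which flips to SUCCESS). A minimal sketch of such a poll-and-wait loop; get_task_state() is a hypothetical stand-in for the real OSISM task-state lookup, not its actual API.

import time

def get_task_state(task_id):
    """Hypothetical stand-in for the real task-state lookup on the manager."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval=1.0):
    """Poll every `interval` seconds until no task is left in STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)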
orchestrator | 2025-05-31 16:29:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:29.657775 | orchestrator | 2025-05-31 16:29:29 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:29.658615 | orchestrator | 2025-05-31 16:29:29 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state STARTED 2025-05-31 16:29:29.659480 | orchestrator | 2025-05-31 16:29:29 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:29.659934 | orchestrator | 2025-05-31 16:29:29 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:29.665542 | orchestrator | 2025-05-31 16:29:29 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:29.665569 | orchestrator | 2025-05-31 16:29:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:32.703294 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:32.703632 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:32.704190 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:32.705118 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task 701f2ba2-b281-4e5e-bb0f-f1d88a775a52 is in state SUCCESS 2025-05-31 16:29:32.705510 | orchestrator | 2025-05-31 16:29:32.705532 | orchestrator | 2025-05-31 16:29:32.705544 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-31 16:29:32.705576 | orchestrator | 2025-05-31 16:29:32.705588 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-31 16:29:32.705599 | orchestrator | Saturday 31 May 2025 16:27:57 +0000 (0:00:00.138) 0:00:00.138 ********** 2025-05-31 16:29:32.705610 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-31 16:29:32.705620 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 16:29:32.705686 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 16:29:32.705699 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 16:29:32.705710 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 16:29:32.705720 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-31 16:29:32.705731 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-31 16:29:32.705744 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-31 16:29:32.705763 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-31 16:29:32.705780 | orchestrator | 2025-05-31 16:29:32.705791 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-31 16:29:32.705802 | orchestrator | Saturday 31 May 2025 16:28:00 +0000 (0:00:02.826) 0:00:02.965 ********** 2025-05-31 16:29:32.705813 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-31 16:29:32.705860 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 16:29:32.705872 | orchestrator | 
ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 16:29:32.705883 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 16:29:32.705893 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 16:29:32.705904 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-31 16:29:32.705915 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-31 16:29:32.705925 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-31 16:29:32.705936 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-31 16:29:32.705946 | orchestrator | 2025-05-31 16:29:32.705957 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-31 16:29:32.705968 | orchestrator | Saturday 31 May 2025 16:28:00 +0000 (0:00:00.241) 0:00:03.206 ********** 2025-05-31 16:29:32.705978 | orchestrator | ok: [testbed-manager] => { 2025-05-31 16:29:32.705991 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-05-31 16:29:32.706003 | orchestrator | } 2025-05-31 16:29:32.706062 | orchestrator | 2025-05-31 16:29:32.706077 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-31 16:29:32.706087 | orchestrator | Saturday 31 May 2025 16:28:00 +0000 (0:00:00.175) 0:00:03.382 ********** 2025-05-31 16:29:32.706098 | orchestrator | changed: [testbed-manager] 2025-05-31 16:29:32.706109 | orchestrator | 2025-05-31 16:29:32.706119 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-31 16:29:32.706130 | orchestrator | Saturday 31 May 2025 16:28:33 +0000 (0:00:32.819) 0:00:36.202 ********** 2025-05-31 16:29:32.706143 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-31 16:29:32.706156 | orchestrator | 2025-05-31 16:29:32.706168 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-31 16:29:32.706180 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.436) 0:00:36.638 ********** 2025-05-31 16:29:32.706222 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-31 16:29:32.706236 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-31 16:29:32.706248 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-31 16:29:32.706261 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-31 16:29:32.706274 | orchestrator | changed: 
[testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-31 16:29:32.706298 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-31 16:29:32.706312 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-31 16:29:32.706331 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-31 16:29:32.706343 | orchestrator | 2025-05-31 16:29:32.706355 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-31 16:29:32.706367 | orchestrator | Saturday 31 May 2025 16:28:36 +0000 (0:00:02.415) 0:00:39.053 ********** 2025-05-31 16:29:32.706379 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:29:32.706391 | orchestrator | 2025-05-31 16:29:32.706403 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:29:32.706415 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 16:29:32.706428 | orchestrator | 2025-05-31 16:29:32.706440 | orchestrator | Saturday 31 May 2025 16:28:36 +0000 (0:00:00.032) 0:00:39.085 ********** 2025-05-31 16:29:32.706452 | orchestrator | =============================================================================== 2025-05-31 16:29:32.706464 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 32.82s 2025-05-31 16:29:32.706476 | orchestrator | Check ceph keys --------------------------------------------------------- 2.83s 2025-05-31 16:29:32.706488 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.42s 2025-05-31 16:29:32.706499 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.44s 2025-05-31 16:29:32.706510 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s 2025-05-31 16:29:32.706520 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s 2025-05-31 16:29:32.706531 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-05-31 16:29:32.706541 | orchestrator | 2025-05-31 16:29:32.708110 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:32.708134 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:32.709412 | orchestrator | 2025-05-31 16:29:32 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:32.709432 | orchestrator | 2025-05-31 16:29:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:35.755511 | orchestrator | 2025-05-31 16:29:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:35.755741 | orchestrator | 2025-05-31 16:29:35 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:35.759978 | orchestrator | 2025-05-31 16:29:35 | INFO  | Task 
a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:35.760583 | orchestrator | 2025-05-31 16:29:35 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:35.761227 | orchestrator | 2025-05-31 16:29:35 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:35.764146 | orchestrator | 2025-05-31 16:29:35 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:35.764191 | orchestrator | 2025-05-31 16:29:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:38.786264 | orchestrator | 2025-05-31 16:29:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:38.787454 | orchestrator | 2025-05-31 16:29:38 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:38.787978 | orchestrator | 2025-05-31 16:29:38 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:38.788458 | orchestrator | 2025-05-31 16:29:38 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:38.789039 | orchestrator | 2025-05-31 16:29:38 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:38.789962 | orchestrator | 2025-05-31 16:29:38 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:38.790133 | orchestrator | 2025-05-31 16:29:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:41.827336 | orchestrator | 2025-05-31 16:29:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:41.827440 | orchestrator | 2025-05-31 16:29:41 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:41.827627 | orchestrator | 2025-05-31 16:29:41 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:41.828232 | orchestrator | 2025-05-31 16:29:41 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:41.829938 | orchestrator | 2025-05-31 16:29:41 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:41.830642 | orchestrator | 2025-05-31 16:29:41 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:41.830700 | orchestrator | 2025-05-31 16:29:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:44.865433 | orchestrator | 2025-05-31 16:29:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:44.866919 | orchestrator | 2025-05-31 16:29:44 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:44.866952 | orchestrator | 2025-05-31 16:29:44 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:44.866969 | orchestrator | 2025-05-31 16:29:44 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:44.868758 | orchestrator | 2025-05-31 16:29:44 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:44.868780 | orchestrator | 2025-05-31 16:29:44 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:44.868792 | orchestrator | 2025-05-31 16:29:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:47.897812 | orchestrator | 2025-05-31 16:29:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:47.899984 | orchestrator | 2025-05-31 
16:29:47 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:47.900025 | orchestrator | 2025-05-31 16:29:47 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:47.900875 | orchestrator | 2025-05-31 16:29:47 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:47.902435 | orchestrator | 2025-05-31 16:29:47 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:47.903073 | orchestrator | 2025-05-31 16:29:47 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:47.903108 | orchestrator | 2025-05-31 16:29:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:50.931489 | orchestrator | 2025-05-31 16:29:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:50.931920 | orchestrator | 2025-05-31 16:29:50 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:50.932584 | orchestrator | 2025-05-31 16:29:50 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:50.933480 | orchestrator | 2025-05-31 16:29:50 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:50.934091 | orchestrator | 2025-05-31 16:29:50 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:50.934747 | orchestrator | 2025-05-31 16:29:50 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:50.934769 | orchestrator | 2025-05-31 16:29:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:53.965607 | orchestrator | 2025-05-31 16:29:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:53.965800 | orchestrator | 2025-05-31 16:29:53 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:53.966741 | orchestrator | 2025-05-31 16:29:53 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:53.968211 | orchestrator | 2025-05-31 16:29:53 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:53.968747 | orchestrator | 2025-05-31 16:29:53 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:53.969465 | orchestrator | 2025-05-31 16:29:53 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:53.969492 | orchestrator | 2025-05-31 16:29:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:29:56.992859 | orchestrator | 2025-05-31 16:29:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:29:56.992952 | orchestrator | 2025-05-31 16:29:56 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:29:56.993384 | orchestrator | 2025-05-31 16:29:56 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:29:56.993875 | orchestrator | 2025-05-31 16:29:56 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:29:56.994476 | orchestrator | 2025-05-31 16:29:56 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:29:56.995010 | orchestrator | 2025-05-31 16:29:56 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:29:56.995733 | orchestrator | 2025-05-31 16:29:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:00.023384 | 
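[editor's note] The "Copy ceph kolla keys to the configuration repository" task earlier in this play places each fetched keyring at one or more destinations under /opt/configuration; the cinder keyring, for example, is copied to the cinder-volume, cinder-backup and nova overlays. A small sketch of that src-to-dest copy using the paths from the task output and plain shutil; the KEY_MAP shown is abbreviated and the function is illustrative, not the playbook itself.

import shutil
from pathlib import Path

# Source keyring (relative to the fetch directory) -> destination in the
# configuration repository, taken from the task output above (abbreviated).
KEY_MAP = [
    ("ceph.client.cinder.keyring",
     "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring"),
    ("ceph.client.cinder.keyring",
     "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring"),
    ("ceph.client.cinder-backup.keyring",
     "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring"),
    ("ceph.client.nova.keyring",
     "/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring"),
]

def copy_keyrings(fetch_dir):
    """Copy each fetched keyring to its destination, creating directories as needed."""
    for src, dest in KEY_MAP:
        dest_path = Path(dest)
        dest_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(Path(fetch_dir) / src, dest_path)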
orchestrator | 2025-05-31 16:30:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:00.023593 | orchestrator | 2025-05-31 16:30:00 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:30:00.024160 | orchestrator | 2025-05-31 16:30:00 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:30:00.024976 | orchestrator | 2025-05-31 16:30:00 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:00.025350 | orchestrator | 2025-05-31 16:30:00 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:00.025844 | orchestrator | 2025-05-31 16:30:00 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:00.025865 | orchestrator | 2025-05-31 16:30:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:03.058006 | orchestrator | 2025-05-31 16:30:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:03.058307 | orchestrator | 2025-05-31 16:30:03 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:30:03.058977 | orchestrator | 2025-05-31 16:30:03 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:30:03.059419 | orchestrator | 2025-05-31 16:30:03 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:03.059953 | orchestrator | 2025-05-31 16:30:03 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:03.063343 | orchestrator | 2025-05-31 16:30:03 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:03.063372 | orchestrator | 2025-05-31 16:30:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:06.089189 | orchestrator | 2025-05-31 16:30:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:06.091265 | orchestrator | 2025-05-31 16:30:06 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:30:06.093376 | orchestrator | 2025-05-31 16:30:06 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:30:06.094550 | orchestrator | 2025-05-31 16:30:06 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:06.096251 | orchestrator | 2025-05-31 16:30:06 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:06.097188 | orchestrator | 2025-05-31 16:30:06 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:06.097465 | orchestrator | 2025-05-31 16:30:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:09.133337 | orchestrator | 2025-05-31 16:30:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:09.133418 | orchestrator | 2025-05-31 16:30:09 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state STARTED 2025-05-31 16:30:09.133993 | orchestrator | 2025-05-31 16:30:09 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state STARTED 2025-05-31 16:30:09.134307 | orchestrator | 2025-05-31 16:30:09 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:09.134936 | orchestrator | 2025-05-31 16:30:09 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:09.135454 | orchestrator | 2025-05-31 16:30:09 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is 
in state STARTED 2025-05-31 16:30:09.135509 | orchestrator | 2025-05-31 16:30:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:12.179412 | orchestrator | 2025-05-31 16:30:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:12.181888 | orchestrator | 2025-05-31 16:30:12 | INFO  | Task d614c43d-794c-447b-b80b-41a8dc78058d is in state SUCCESS 2025-05-31 16:30:12.182205 | orchestrator | 2025-05-31 16:30:12.182234 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-31 16:30:12.182247 | orchestrator | 2025-05-31 16:30:12.182259 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-31 16:30:12.182270 | orchestrator | Saturday 31 May 2025 16:28:39 +0000 (0:00:00.153) 0:00:00.153 ********** 2025-05-31 16:30:12.182281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-31 16:30:12.182294 | orchestrator | 2025-05-31 16:30:12.182305 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-31 16:30:12.182316 | orchestrator | Saturday 31 May 2025 16:28:39 +0000 (0:00:00.192) 0:00:00.345 ********** 2025-05-31 16:30:12.182350 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-31 16:30:12.182362 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-31 16:30:12.182374 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-31 16:30:12.182385 | orchestrator | 2025-05-31 16:30:12.182397 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-31 16:30:12.182408 | orchestrator | Saturday 31 May 2025 16:28:41 +0000 (0:00:01.140) 0:00:01.485 ********** 2025-05-31 16:30:12.182419 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-31 16:30:12.182430 | orchestrator | 2025-05-31 16:30:12.182442 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-31 16:30:12.182453 | orchestrator | Saturday 31 May 2025 16:28:42 +0000 (0:00:01.002) 0:00:02.488 ********** 2025-05-31 16:30:12.182464 | orchestrator | changed: [testbed-manager] 2025-05-31 16:30:12.182475 | orchestrator | 2025-05-31 16:30:12.182486 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-31 16:30:12.182500 | orchestrator | Saturday 31 May 2025 16:28:42 +0000 (0:00:00.772) 0:00:03.260 ********** 2025-05-31 16:30:12.182520 | orchestrator | changed: [testbed-manager] 2025-05-31 16:30:12.182532 | orchestrator | 2025-05-31 16:30:12.182543 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-31 16:30:12.182554 | orchestrator | Saturday 31 May 2025 16:28:43 +0000 (0:00:00.967) 0:00:04.227 ********** 2025-05-31 16:30:12.182565 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
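The retry above is the role waiting for the compose-managed cephclient container to come up after /opt/cephclient/docker-compose.yml was written; it succeeds on a later attempt below. A minimal way to inspect the same service by hand on testbed-manager, assuming the compose project lives in /opt/cephclient as shown in the task output and that the service is named cephclient:

  # check the compose project the cephclient role manages (directory taken from the
  # play output above; the service name "cephclient" is an assumption)
  cd /opt/cephclient
  docker compose ps                # the container should be listed as running/healthy
  docker compose logs --tail=20    # recent container output if the retry loop persists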
2025-05-31 16:30:12.182575 | orchestrator | ok: [testbed-manager] 2025-05-31 16:30:12.182586 | orchestrator | 2025-05-31 16:30:12.182597 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-31 16:30:12.182608 | orchestrator | Saturday 31 May 2025 16:29:23 +0000 (0:00:39.424) 0:00:43.652 ********** 2025-05-31 16:30:12.182618 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-31 16:30:12.182629 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-31 16:30:12.182640 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-31 16:30:12.182651 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-31 16:30:12.182662 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-31 16:30:12.182672 | orchestrator | 2025-05-31 16:30:12.182683 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-31 16:30:12.182694 | orchestrator | Saturday 31 May 2025 16:29:26 +0000 (0:00:03.568) 0:00:47.221 ********** 2025-05-31 16:30:12.182704 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-31 16:30:12.182715 | orchestrator | 2025-05-31 16:30:12.182726 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-31 16:30:12.182737 | orchestrator | Saturday 31 May 2025 16:29:27 +0000 (0:00:00.358) 0:00:47.580 ********** 2025-05-31 16:30:12.182759 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:30:12.182770 | orchestrator | 2025-05-31 16:30:12.182781 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-31 16:30:12.182792 | orchestrator | Saturday 31 May 2025 16:29:27 +0000 (0:00:00.092) 0:00:47.672 ********** 2025-05-31 16:30:12.182802 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:30:12.182813 | orchestrator | 2025-05-31 16:30:12.182824 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-31 16:30:12.182866 | orchestrator | Saturday 31 May 2025 16:29:27 +0000 (0:00:00.218) 0:00:47.891 ********** 2025-05-31 16:30:12.182879 | orchestrator | changed: [testbed-manager] 2025-05-31 16:30:12.182891 | orchestrator | 2025-05-31 16:30:12.182903 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-31 16:30:12.182915 | orchestrator | Saturday 31 May 2025 16:29:28 +0000 (0:00:01.263) 0:00:49.155 ********** 2025-05-31 16:30:12.182928 | orchestrator | changed: [testbed-manager] 2025-05-31 16:30:12.182940 | orchestrator | 2025-05-31 16:30:12.182952 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-31 16:30:12.182964 | orchestrator | Saturday 31 May 2025 16:29:29 +0000 (0:00:00.863) 0:00:50.019 ********** 2025-05-31 16:30:12.182976 | orchestrator | changed: [testbed-manager] 2025-05-31 16:30:12.183041 | orchestrator | 2025-05-31 16:30:12.183054 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-31 16:30:12.183066 | orchestrator | Saturday 31 May 2025 16:29:30 +0000 (0:00:00.506) 0:00:50.525 ********** 2025-05-31 16:30:12.183078 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-31 16:30:12.183090 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-31 16:30:12.183103 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-31 16:30:12.183114 | orchestrator | ok: 
[testbed-manager] => (item=rbd)
2025-05-31 16:30:12.183139 | orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
orchestrator | Saturday 31 May 2025 16:29:31 +0000 (0:00:01.275) 0:00:51.800 **********
orchestrator | ===============================================================================
orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.43s
orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.57s
orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.28s
orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.26s
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.14s
orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.00s
orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s
orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.86s
orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.77s
orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.51s
orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.36s
orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.22s
orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.19s
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s
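The wrapper scripts copied by the play above (ceph, ceph-authtool, rados, radosgw-admin, rbd) make the containerized Ceph CLI available on testbed-manager; they are expected to run the tools inside the cephclient container against the configuration under /opt/cephclient/configuration. A quick, read-only smoke test once the service is healthy:

  # verify the wrappers and the cluster they point at (read-only checks)
  ceph -s               # overall cluster status summary
  ceph health detail    # expanded health information
  rbd ls                # RBD images in the default pool (may be empty or error if the pool is absent)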
[between 16:30:12 and 16:30:39 the orchestrator kept re-checking the remaining tasks roughly every three seconds; f2bae605-c816-4e28-b7ae-14a464bce7fe, a01bcbfd-0d93-4aaa-8dc4-0df16e59d887, 56ba4c1a-43fc-4736-9660-a157a437ca22, 42ec3df5-e7df-4bf3-aabe-888fe6099eaf and 1a837330-d618-4cb1-a419-ee83b987aabc all remained in state STARTED, each pass ending with "Wait 1 second(s) until the next check"]
2025-05-31 16:30:42.537738 | orchestrator | 2025-05-31 16:30:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:30:42.538346 | orchestrator | 2025-05-31 16:30:42 | INFO  | Task a01bcbfd-0d93-4aaa-8dc4-0df16e59d887 is in state SUCCESS
2025-05-31 16:30:42.700829 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
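The [WARNING] above is typically emitted by ansible-core when its own version falls outside the requires_ansible range a collection declares in its meta/runtime.yml. A quick way to compare both sides on the manager, assuming the collection is installed under the same collections path seen for osism.services earlier in this log:

  # compare the running ansible-core version with the range the collection declares
  # (collection path is an assumption based on the osism collections path above)
  ansible --version | head -n 1
  grep requires_ansible /usr/share/ansible/collections/ansible_collections/osism/commons/meta/runtime.yml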
2025-05-31 16:30:42.700934 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
orchestrator | TASK [Disable the ceph dashboard] **********************************************
orchestrator | Saturday 31 May 2025 16:29:34 +0000 (0:00:00.412) 0:00:00.412 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
orchestrator | Saturday 31 May 2025 16:29:36 +0000 (0:00:01.618) 0:00:02.031 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
orchestrator | Saturday 31 May 2025 16:29:37 +0000 (0:00:00.800) 0:00:02.831 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
orchestrator | Saturday 31 May 2025 16:29:38 +0000 (0:00:00.850) 0:00:03.682 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
orchestrator | Saturday 31 May 2025 16:29:38 +0000 (0:00:00.920) 0:00:04.602 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
orchestrator | Saturday 31 May 2025 16:29:39 +0000 (0:00:00.862) 0:00:05.465 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Enable the ceph dashboard] ***********************************************
orchestrator | Saturday 31 May 2025 16:29:40 +0000 (0:00:00.830) 0:00:06.296 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
orchestrator | Saturday 31 May 2025 16:29:41 +0000 (0:00:01.262) 0:00:07.558 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Create admin user] *******************************************************
orchestrator | Saturday 31 May 2025 16:29:43 +0000 (0:00:01.123) 0:00:08.682 **********
orchestrator | changed: [testbed-manager]
orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
orchestrator | Saturday 31 May 2025 16:30:03 +0000 (0:00:20.921) 0:00:29.604 **********
orchestrator | skipping: [testbed-manager]
orchestrator | PLAY [Restart ceph manager services] *******************************************
orchestrator | TASK [Restart ceph manager service] ********************************************
orchestrator | Saturday 31 May 2025 16:30:04 +0000 (0:00:00.521) 0:00:30.125 **********
orchestrator | changed: [testbed-node-0]
orchestrator | PLAY [Restart ceph manager services] *******************************************
orchestrator | TASK [Restart ceph manager service] ********************************************
orchestrator | Saturday 31 May 2025 16:30:06 +0000 (0:00:01.900) 0:00:32.025 **********
orchestrator | changed: [testbed-node-1]
orchestrator | PLAY [Restart ceph manager services] *******************************************
orchestrator | TASK [Restart ceph manager service] ********************************************
orchestrator | Saturday 31 May 2025 16:30:08 +0000 (0:00:01.767) 0:00:33.793 **********
orchestrator | changed: [testbed-node-2]
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Saturday 31 May 2025 16:30:09 +0000 (0:00:01.440) 0:00:35.234 **********
orchestrator | ===============================================================================
orchestrator | Create admin user ------------------------------------------------------ 20.92s
orchestrator | Restart ceph manager service -------------------------------------------- 5.11s
orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.62s
orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.26s
orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.12s
orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.92s
orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.86s
orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.85s
orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.83s
orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.80s
orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.52s
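The dashboard bootstrap above drives the Ceph mgr configuration and creates the admin account via the Ceph CLI. A rough manual equivalent is sketched below; the user name, role and password file are illustrative and not taken from the play:

  # sketch of the dashboard bootstrap steps (illustrative values)
  ceph mgr module disable dashboard
  ceph config set mgr mgr/dashboard/ssl false
  ceph config set mgr mgr/dashboard/server_port 7000
  ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
  ceph config set mgr mgr/dashboard/standby_behaviour error
  ceph config set mgr mgr/dashboard/standby_error_status_code 404
  ceph mgr module enable dashboard
  echo "$CEPH_DASHBOARD_PASSWORD" > /tmp/dashboard_password   # temporary file, removed afterwards
  ceph dashboard ac-user-create admin -i /tmp/dashboard_password administrator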
mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.92s 2025-05-31 16:30:42.701971 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.86s 2025-05-31 16:30:42.701989 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.85s 2025-05-31 16:30:42.702000 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.83s 2025-05-31 16:30:42.702010 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.80s 2025-05-31 16:30:42.702066 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.52s 2025-05-31 16:30:42.702078 | orchestrator | 2025-05-31 16:30:42.702088 | orchestrator | 2025-05-31 16:30:42.702099 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:30:42.702110 | orchestrator | 2025-05-31 16:30:42.702120 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:30:42.702140 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.438) 0:00:00.438 ********** 2025-05-31 16:30:42.702151 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:30:42.702163 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:30:42.702173 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:30:42.702184 | orchestrator | 2025-05-31 16:30:42.702195 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:30:42.702206 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.406) 0:00:00.844 ********** 2025-05-31 16:30:42.702217 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-31 16:30:42.702228 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-31 16:30:42.702239 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-31 16:30:42.702249 | orchestrator | 2025-05-31 16:30:42.702260 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-31 16:30:42.702271 | orchestrator | 2025-05-31 16:30:42.702281 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-31 16:30:42.702292 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.328) 0:00:01.172 ********** 2025-05-31 16:30:42.702303 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:30:42.702314 | orchestrator | 2025-05-31 16:30:42.702324 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-31 16:30:42.702335 | orchestrator | Saturday 31 May 2025 16:28:35 +0000 (0:00:00.785) 0:00:01.957 ********** 2025-05-31 16:30:42.702346 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-31 16:30:42.702356 | orchestrator | 2025-05-31 16:30:42.702367 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-31 16:30:42.702378 | orchestrator | Saturday 31 May 2025 16:28:39 +0000 (0:00:03.734) 0:00:05.692 ********** 2025-05-31 16:30:42.702389 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-31 16:30:42.702400 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-31 16:30:42.702411 | 
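The service-ks-register tasks register Barbican with Keystone: a key-manager service entry, internal and public endpoints, the service project, a service user and the Barbican roles (key-manager:service-admin, creator, observer, audit). A hedged sketch of the same registration with the plain OpenStack CLI; the region name and the password prompt are assumptions, the endpoint URLs are the ones shown above:

  # sketch of the Keystone registration performed by service-ks-register
  openstack service create --name barbican --description "Key Manager" key-manager
  openstack endpoint create --region RegionOne barbican internal https://api-int.testbed.osism.xyz:9311
  openstack endpoint create --region RegionOne barbican public https://api.testbed.osism.xyz:9311
  openstack project create --domain default service          # usually exists already
  openstack user create --domain default --password-prompt barbican
  openstack role add --project service --user barbican admin
  openstack role create creator                               # likewise observer, audit, key-manager:service-admin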
orchestrator | 2025-05-31 16:30:42.702422 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-31 16:30:42.702433 | orchestrator | Saturday 31 May 2025 16:28:46 +0000 (0:00:07.250) 0:00:12.943 ********** 2025-05-31 16:30:42.702444 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-31 16:30:42.702454 | orchestrator | 2025-05-31 16:30:42.702465 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-31 16:30:42.702476 | orchestrator | Saturday 31 May 2025 16:28:50 +0000 (0:00:03.718) 0:00:16.661 ********** 2025-05-31 16:30:42.702486 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:30:42.702497 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-31 16:30:42.702508 | orchestrator | 2025-05-31 16:30:42.702518 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-31 16:30:42.702529 | orchestrator | Saturday 31 May 2025 16:28:54 +0000 (0:00:04.409) 0:00:21.071 ********** 2025-05-31 16:30:42.702540 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:30:42.702550 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-31 16:30:42.702561 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-31 16:30:42.702572 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-31 16:30:42.702583 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-31 16:30:42.702594 | orchestrator | 2025-05-31 16:30:42.702605 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-31 16:30:42.702615 | orchestrator | Saturday 31 May 2025 16:29:11 +0000 (0:00:16.763) 0:00:37.834 ********** 2025-05-31 16:30:42.702626 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-31 16:30:42.702637 | orchestrator | 2025-05-31 16:30:42.702647 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-31 16:30:42.702664 | orchestrator | Saturday 31 May 2025 16:29:16 +0000 (0:00:04.601) 0:00:42.436 ********** 2025-05-31 16:30:42.702700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.702719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.702732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.702744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.702756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.702788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.702801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.702812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.702824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.702863 | orchestrator | 2025-05-31 16:30:42.702875 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-31 16:30:42.702885 | orchestrator | Saturday 31 May 2025 16:29:19 +0000 (0:00:03.175) 0:00:45.611 ********** 2025-05-31 16:30:42.702896 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-31 16:30:42.702907 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-31 16:30:42.702917 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-31 16:30:42.702928 | orchestrator | 2025-05-31 16:30:42.702938 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-31 16:30:42.702949 | orchestrator | Saturday 31 May 2025 16:29:21 +0000 (0:00:02.108) 0:00:47.719 ********** 2025-05-31 16:30:42.702960 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.703018 | orchestrator | 2025-05-31 16:30:42.703031 | orchestrator | TASK [barbican : Set barbican 
policy file] ************************************* 2025-05-31 16:30:42.703042 | orchestrator | Saturday 31 May 2025 16:29:21 +0000 (0:00:00.119) 0:00:47.839 ********** 2025-05-31 16:30:42.703053 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.703064 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:30:42.703075 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:30:42.703085 | orchestrator | 2025-05-31 16:30:42.703096 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-31 16:30:42.703106 | orchestrator | Saturday 31 May 2025 16:29:21 +0000 (0:00:00.307) 0:00:48.147 ********** 2025-05-31 16:30:42.703117 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:30:42.703128 | orchestrator | 2025-05-31 16:30:42.703138 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-31 16:30:42.703149 | orchestrator | Saturday 31 May 2025 16:29:22 +0000 (0:00:00.533) 0:00:48.680 ********** 2025-05-31 16:30:42.703175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.703188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.703201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.703219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703276 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703306 | orchestrator | 2025-05-31 16:30:42.703317 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-31 16:30:42.703327 | orchestrator | Saturday 31 May 2025 16:29:26 +0000 (0:00:04.347) 0:00:53.028 ********** 2025-05-31 16:30:42.703339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.703358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703386 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.703397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.703409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703444 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:30:42.703462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.703479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703502 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:30:42.703525 | orchestrator | 2025-05-31 16:30:42.703536 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-31 16:30:42.703547 | orchestrator | Saturday 31 May 2025 16:29:28 +0000 (0:00:01.695) 0:00:54.724 ********** 2025-05-31 16:30:42.703558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.703576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.703612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703644 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.703662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703673 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:30:42.703684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.703696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.703730 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:30:42.703741 | orchestrator | 2025-05-31 16:30:42.703752 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-31 16:30:42.703763 | orchestrator | Saturday 31 May 2025 16:29:29 +0000 (0:00:01.613) 0:00:56.338 ********** 2025-05-31 16:30:42.703774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.703793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.703805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.703887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.703954 | orchestrator | 2025-05-31 16:30:42.703966 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-31 16:30:42.703977 | orchestrator | Saturday 31 May 2025 16:29:35 +0000 (0:00:05.641) 0:01:01.979 ********** 2025-05-31 16:30:42.703988 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.703999 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:30:42.704010 | 
orchestrator | changed: [testbed-node-2] 2025-05-31 16:30:42.704020 | orchestrator | 2025-05-31 16:30:42.704031 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-31 16:30:42.704042 | orchestrator | Saturday 31 May 2025 16:29:38 +0000 (0:00:02.612) 0:01:04.592 ********** 2025-05-31 16:30:42.704059 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:30:42.704071 | orchestrator | 2025-05-31 16:30:42.704082 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-31 16:30:42.704093 | orchestrator | Saturday 31 May 2025 16:29:41 +0000 (0:00:03.092) 0:01:07.685 ********** 2025-05-31 16:30:42.704104 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:30:42.704114 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.704125 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:30:42.704136 | orchestrator | 2025-05-31 16:30:42.704147 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-31 16:30:42.704157 | orchestrator | Saturday 31 May 2025 16:29:42 +0000 (0:00:00.980) 0:01:08.665 ********** 2025-05-31 16:30:42.704173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.704195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.704208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.704220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2025-05-31 16:30:42.704285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704308 | orchestrator | 2025-05-31 16:30:42.704319 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-31 16:30:42.704330 | orchestrator | Saturday 31 May 2025 16:29:53 +0000 (0:00:11.344) 0:01:20.010 ********** 2025-05-31 16:30:42.704348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.704368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.704379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.704389 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.704399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.704409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.704420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.704430 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:30:42.704454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 16:30:42.704472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.704483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:30:42.704492 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:30:42.704502 | orchestrator | 2025-05-31 16:30:42.704512 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-31 16:30:42.704521 | orchestrator | Saturday 31 May 2025 16:29:55 +0000 (0:00:01.506) 0:01:21.517 ********** 2025-05-31 16:30:42.704531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.704548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.704569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 16:30:42.704579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:30:42.704655 | orchestrator | 2025-05-31 16:30:42.704665 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-31 16:30:42.704674 | orchestrator | Saturday 31 May 2025 16:29:59 +0000 (0:00:03.927) 0:01:25.445 ********** 2025-05-31 16:30:42.704684 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:30:42.704693 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:30:42.704703 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:30:42.704712 | orchestrator | 2025-05-31 16:30:42.704721 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-31 16:30:42.704731 | orchestrator | Saturday 31 May 2025 16:29:59 +0000 (0:00:00.614) 0:01:26.060 ********** 2025-05-31 16:30:42.704740 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.704749 | orchestrator | 2025-05-31 16:30:42.704759 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-31 16:30:42.704768 | orchestrator | Saturday 31 May 2025 16:30:02 
+0000 (0:00:02.925) 0:01:28.985 ********** 2025-05-31 16:30:42.704778 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.704787 | orchestrator | 2025-05-31 16:30:42.704796 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-31 16:30:42.704806 | orchestrator | Saturday 31 May 2025 16:30:05 +0000 (0:00:02.548) 0:01:31.534 ********** 2025-05-31 16:30:42.704815 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.704824 | orchestrator | 2025-05-31 16:30:42.704851 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-31 16:30:42.704869 | orchestrator | Saturday 31 May 2025 16:30:15 +0000 (0:00:10.168) 0:01:41.702 ********** 2025-05-31 16:30:42.704886 | orchestrator | 2025-05-31 16:30:42.704904 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-31 16:30:42.704920 | orchestrator | Saturday 31 May 2025 16:30:15 +0000 (0:00:00.120) 0:01:41.822 ********** 2025-05-31 16:30:42.704934 | orchestrator | 2025-05-31 16:30:42.704943 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-31 16:30:42.704952 | orchestrator | Saturday 31 May 2025 16:30:15 +0000 (0:00:00.309) 0:01:42.132 ********** 2025-05-31 16:30:42.704962 | orchestrator | 2025-05-31 16:30:42.704971 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-31 16:30:42.704980 | orchestrator | Saturday 31 May 2025 16:30:15 +0000 (0:00:00.112) 0:01:42.245 ********** 2025-05-31 16:30:42.704989 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:30:42.705005 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:30:42.705014 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.705024 | orchestrator | 2025-05-31 16:30:42.705033 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-31 16:30:42.705042 | orchestrator | Saturday 31 May 2025 16:30:24 +0000 (0:00:08.541) 0:01:50.786 ********** 2025-05-31 16:30:42.705052 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.705061 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:30:42.705070 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:30:42.705080 | orchestrator | 2025-05-31 16:30:42.705089 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-31 16:30:42.705098 | orchestrator | Saturday 31 May 2025 16:30:34 +0000 (0:00:10.332) 0:02:01.118 ********** 2025-05-31 16:30:42.705107 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:30:42.705117 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:30:42.705126 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:30:42.705135 | orchestrator | 2025-05-31 16:30:42.705144 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:30:42.705154 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-31 16:30:42.705164 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:30:42.705174 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:30:42.705183 | orchestrator | 2025-05-31 16:30:42.705193 | orchestrator | 2025-05-31 16:30:42.705202 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-31 16:30:42.705217 | orchestrator | Saturday 31 May 2025 16:30:40 +0000 (0:00:06.132) 0:02:07.251 ********** 2025-05-31 16:30:42.705227 | orchestrator | =============================================================================== 2025-05-31 16:30:42.705237 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.76s 2025-05-31 16:30:42.705246 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.34s 2025-05-31 16:30:42.705255 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.33s 2025-05-31 16:30:42.705265 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.17s 2025-05-31 16:30:42.705279 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.54s 2025-05-31 16:30:42.705289 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.25s 2025-05-31 16:30:42.705298 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.13s 2025-05-31 16:30:42.705357 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.64s 2025-05-31 16:30:42.705369 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.60s 2025-05-31 16:30:42.705378 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.41s 2025-05-31 16:30:42.705388 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.35s 2025-05-31 16:30:42.705397 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.93s 2025-05-31 16:30:42.705407 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.73s 2025-05-31 16:30:42.705416 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.72s 2025-05-31 16:30:42.705426 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.18s 2025-05-31 16:30:42.705435 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 3.09s 2025-05-31 16:30:42.705445 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.92s 2025-05-31 16:30:42.705454 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.61s 2025-05-31 16:30:42.705470 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.55s 2025-05-31 16:30:42.705479 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.11s 2025-05-31 16:30:42.705489 | orchestrator | 2025-05-31 16:30:42 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:30:42.705512 | orchestrator | 2025-05-31 16:30:42 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:42.705522 | orchestrator | 2025-05-31 16:30:42 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:42.705541 | orchestrator | 2025-05-31 16:30:42 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:42.705550 | orchestrator | 2025-05-31 16:30:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:45.586771 | orchestrator | 2025-05-31 16:30:45 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:45.587044 | orchestrator | 2025-05-31 16:30:45 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:30:45.589298 | orchestrator | 2025-05-31 16:30:45 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:45.589751 | orchestrator | 2025-05-31 16:30:45 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:45.590460 | orchestrator | 2025-05-31 16:30:45 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:45.590497 | orchestrator | 2025-05-31 16:30:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:48.623729 | orchestrator | 2025-05-31 16:30:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:48.623890 | orchestrator | 2025-05-31 16:30:48 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:30:48.623910 | orchestrator | 2025-05-31 16:30:48 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:48.623922 | orchestrator | 2025-05-31 16:30:48 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:48.623933 | orchestrator | 2025-05-31 16:30:48 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:48.623944 | orchestrator | 2025-05-31 16:30:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:51.649702 | orchestrator | 2025-05-31 16:30:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:51.649992 | orchestrator | 2025-05-31 16:30:51 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:30:51.650688 | orchestrator | 2025-05-31 16:30:51 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:51.651618 | orchestrator | 2025-05-31 16:30:51 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:51.652115 | orchestrator | 2025-05-31 16:30:51 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:51.652138 | orchestrator | 2025-05-31 16:30:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:54.686113 | orchestrator | 2025-05-31 16:30:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:54.686300 | orchestrator | 2025-05-31 16:30:54 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:30:54.687543 | orchestrator | 2025-05-31 16:30:54 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:54.687908 | orchestrator | 2025-05-31 16:30:54 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:54.688563 | orchestrator | 2025-05-31 16:30:54 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:54.688747 | orchestrator | 2025-05-31 16:30:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:30:57.727155 | orchestrator | 2025-05-31 16:30:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:30:57.727259 | orchestrator | 2025-05-31 16:30:57 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:30:57.728166 | orchestrator | 2025-05-31 16:30:57 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:30:57.730171 | orchestrator | 2025-05-31 
16:30:57 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:30:57.731603 | orchestrator | 2025-05-31 16:30:57 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:30:57.731636 | orchestrator | 2025-05-31 16:30:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:00.764519 | orchestrator | 2025-05-31 16:31:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:00.764616 | orchestrator | 2025-05-31 16:31:00 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:00.765570 | orchestrator | 2025-05-31 16:31:00 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:00.772266 | orchestrator | 2025-05-31 16:31:00 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:00.773331 | orchestrator | 2025-05-31 16:31:00 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:00.773359 | orchestrator | 2025-05-31 16:31:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:03.802055 | orchestrator | 2025-05-31 16:31:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:03.802213 | orchestrator | 2025-05-31 16:31:03 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:03.802300 | orchestrator | 2025-05-31 16:31:03 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:03.802691 | orchestrator | 2025-05-31 16:31:03 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:03.803282 | orchestrator | 2025-05-31 16:31:03 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:03.803307 | orchestrator | 2025-05-31 16:31:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:06.839868 | orchestrator | 2025-05-31 16:31:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:06.840628 | orchestrator | 2025-05-31 16:31:06 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:06.841985 | orchestrator | 2025-05-31 16:31:06 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:06.843239 | orchestrator | 2025-05-31 16:31:06 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:06.845379 | orchestrator | 2025-05-31 16:31:06 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:06.845460 | orchestrator | 2025-05-31 16:31:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:09.911420 | orchestrator | 2025-05-31 16:31:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:09.911509 | orchestrator | 2025-05-31 16:31:09 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:09.912292 | orchestrator | 2025-05-31 16:31:09 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:09.913334 | orchestrator | 2025-05-31 16:31:09 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:09.914186 | orchestrator | 2025-05-31 16:31:09 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:09.914212 | orchestrator | 2025-05-31 16:31:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:12.977545 | orchestrator | 2025-05-31 
16:31:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:12.980589 | orchestrator | 2025-05-31 16:31:12 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:12.982275 | orchestrator | 2025-05-31 16:31:12 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:12.984306 | orchestrator | 2025-05-31 16:31:12 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:12.985364 | orchestrator | 2025-05-31 16:31:12 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:12.985706 | orchestrator | 2025-05-31 16:31:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:16.026459 | orchestrator | 2025-05-31 16:31:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:16.026565 | orchestrator | 2025-05-31 16:31:16 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:16.026582 | orchestrator | 2025-05-31 16:31:16 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:16.034287 | orchestrator | 2025-05-31 16:31:16 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:16.034336 | orchestrator | 2025-05-31 16:31:16 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:16.034349 | orchestrator | 2025-05-31 16:31:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:19.072010 | orchestrator | 2025-05-31 16:31:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:19.072140 | orchestrator | 2025-05-31 16:31:19 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:19.072398 | orchestrator | 2025-05-31 16:31:19 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:19.073217 | orchestrator | 2025-05-31 16:31:19 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:19.076283 | orchestrator | 2025-05-31 16:31:19 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:19.076352 | orchestrator | 2025-05-31 16:31:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:22.126990 | orchestrator | 2025-05-31 16:31:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:22.127170 | orchestrator | 2025-05-31 16:31:22 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:22.131000 | orchestrator | 2025-05-31 16:31:22 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:22.133885 | orchestrator | 2025-05-31 16:31:22 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:22.134652 | orchestrator | 2025-05-31 16:31:22 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:22.134682 | orchestrator | 2025-05-31 16:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:25.199434 | orchestrator | 2025-05-31 16:31:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:25.200628 | orchestrator | 2025-05-31 16:31:25 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:25.205540 | orchestrator | 2025-05-31 16:31:25 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:25.206116 | 
orchestrator | 2025-05-31 16:31:25 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:25.207972 | orchestrator | 2025-05-31 16:31:25 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:25.207997 | orchestrator | 2025-05-31 16:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:28.257938 | orchestrator | 2025-05-31 16:31:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:28.261366 | orchestrator | 2025-05-31 16:31:28 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:28.261809 | orchestrator | 2025-05-31 16:31:28 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:28.262905 | orchestrator | 2025-05-31 16:31:28 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:28.264573 | orchestrator | 2025-05-31 16:31:28 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:28.264680 | orchestrator | 2025-05-31 16:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:31.324570 | orchestrator | 2025-05-31 16:31:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:31.326273 | orchestrator | 2025-05-31 16:31:31 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:31.327540 | orchestrator | 2025-05-31 16:31:31 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:31.330360 | orchestrator | 2025-05-31 16:31:31 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state STARTED 2025-05-31 16:31:31.331987 | orchestrator | 2025-05-31 16:31:31 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:31.332121 | orchestrator | 2025-05-31 16:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:34.378496 | orchestrator | 2025-05-31 16:31:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:34.380666 | orchestrator | 2025-05-31 16:31:34 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:34.383725 | orchestrator | 2025-05-31 16:31:34 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:34.386868 | orchestrator | 2025-05-31 16:31:34 | INFO  | Task 42ec3df5-e7df-4bf3-aabe-888fe6099eaf is in state SUCCESS 2025-05-31 16:31:34.389058 | orchestrator | 2025-05-31 16:31:34.389095 | orchestrator | 2025-05-31 16:31:34.389108 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:31:34.389145 | orchestrator | 2025-05-31 16:31:34.389158 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:31:34.389170 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.356) 0:00:00.356 ********** 2025-05-31 16:31:34.389232 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:31:34.389245 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:31:34.389256 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:31:34.389267 | orchestrator | 2025-05-31 16:31:34.389278 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:31:34.389288 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.386) 0:00:00.742 ********** 2025-05-31 16:31:34.389300 | orchestrator | ok: [testbed-node-0] => 
2025-05-31 16:31:34.389300 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-31 16:31:34.389336 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-31 16:31:34.389347 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-31 16:31:34.389358 | orchestrator |
2025-05-31 16:31:34.389369 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-31 16:31:34.389379 | orchestrator |
2025-05-31 16:31:34.389390 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-31 16:31:34.389401 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.420) 0:00:01.163 **********
2025-05-31 16:31:34.389412 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 16:31:34.389423 | orchestrator |
2025-05-31 16:31:34.389434 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-31 16:31:34.389445 | orchestrator | Saturday 31 May 2025 16:28:35 +0000 (0:00:00.609) 0:00:01.772 **********
2025-05-31 16:31:34.389455 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-31 16:31:34.389540 | orchestrator |
2025-05-31 16:31:34.389552 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-31 16:31:34.389563 | orchestrator | Saturday 31 May 2025 16:28:39 +0000 (0:00:04.182) 0:00:05.954 **********
2025-05-31 16:31:34.389600 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-31 16:31:34.389613 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-31 16:31:34.389624 | orchestrator |
2025-05-31 16:31:34.389636 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-31 16:31:34.389649 | orchestrator | Saturday 31 May 2025 16:28:46 +0000 (0:00:07.080) 0:00:13.035 **********
2025-05-31 16:31:34.389661 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-31 16:31:34.389673 | orchestrator |
2025-05-31 16:31:34.389685 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-31 16:31:34.389697 | orchestrator | Saturday 31 May 2025 16:28:50 +0000 (0:00:03.664) 0:00:16.699 **********
2025-05-31 16:31:34.389709 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-31 16:31:34.389722 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-31 16:31:34.389734 | orchestrator |
2025-05-31 16:31:34.389746 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-31 16:31:34.389758 | orchestrator | Saturday 31 May 2025 16:28:54 +0000 (0:00:04.086) 0:00:20.786 **********
2025-05-31 16:31:34.389770 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-31 16:31:34.389782 | orchestrator |
2025-05-31 16:31:34.389794 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-31 16:31:34.389806 | orchestrator | Saturday 31 May 2025 16:28:57 +0000 (0:00:03.336) 0:00:24.123 **********
2025-05-31 16:31:34.389817 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-05-31 16:31:34.389829 | orchestrator |
2025-05-31 16:31:34.389840 | orchestrator | TASK [designate : Ensuring config directories exist]
*************************** 2025-05-31 16:31:34.389878 | orchestrator | Saturday 31 May 2025 16:29:02 +0000 (0:00:04.341) 0:00:28.465 ********** 2025-05-31 16:31:34.389909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.389951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.389965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.389980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-05-31 16:31:34.389993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.390249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.390277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.390431 | orchestrator | 2025-05-31 16:31:34.390443 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-31 16:31:34.390454 | orchestrator | Saturday 31 May 2025 16:29:05 +0000 (0:00:03.326) 0:00:31.792 ********** 2025-05-31 16:31:34.390465 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.390476 | orchestrator | 2025-05-31 16:31:34.390487 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-31 16:31:34.390498 | orchestrator | Saturday 31 May 2025 16:29:05 +0000 (0:00:00.131) 0:00:31.923 ********** 2025-05-31 16:31:34.390509 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.390520 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:34.390530 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:34.390541 | orchestrator | 2025-05-31 16:31:34.390552 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-31 16:31:34.390562 | orchestrator | Saturday 31 May 2025 16:29:06 +0000 (0:00:00.444) 0:00:32.368 ********** 2025-05-31 16:31:34.390574 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:31:34.390585 | orchestrator | 2025-05-31 16:31:34.390595 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-31 16:31:34.390606 | orchestrator | Saturday 31 May 2025 16:29:06 +0000 (0:00:00.552) 0:00:32.920 ********** 2025-05-31 16:31:34.390617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.390629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.390655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.390674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.390939 | orchestrator | 2025-05-31 16:31:34.390950 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-31 16:31:34.390961 | orchestrator | Saturday 31 May 2025 16:29:13 +0000 (0:00:06.485) 0:00:39.406 ********** 2025-05-31 16:31:34.390973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.390984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.391008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.391019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.391050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391102 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.391118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391160 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:34.391172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.391190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.391201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.391643 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:34.391655 | orchestrator | 2025-05-31 16:31:34.391667 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-31 16:31:34.391678 | orchestrator | Saturday 31 May 2025 16:29:14 +0000 (0:00:01.102) 0:00:40.509 ********** 2025-05-31 16:31:34.391690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.391957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.391975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392045 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.392055 | 
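Each item={...} printed in these loops is one entry of the designate role's service map: container name, group, enabled flag, image, the bind mounts for /etc/kolla/<service> and the kolla log volume, a healthcheck definition, and, for the API service only, HAProxy listener settings. Entries with enabled: False (designate-sink here) are skipped by every per-service loop. A small Python sketch of that structure, with values copied from the log output above (illustrative only, not the role's literal variable definition):

```python
# Illustrative reconstruction of two service entries as printed in the log.
designate_services = {
    "designate-api": {
        "container_name": "designate_api",
        "group": "designate-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/designate-api:18.0.1.20241206",
        "volumes": [
            "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "",  # the empty trailing entry also appears in the log output
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
            "timeout": "30",
        },
    },
    "designate-sink": {
        "container_name": "designate_sink",
        "group": "designate-sink",
        "enabled": False,  # this is why every designate-sink item shows "skipping"
        "image": "registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206",
    },
}

# The per-item loops above effectively iterate only the enabled services:
enabled = {name: svc for name, svc in designate_services.items() if svc["enabled"]}
print(sorted(enabled))  # ['designate-api']
```

The same service map drives the config-directory, certificate, config.json and container loops, which is why the identical item dumps reappear under each task.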
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.392073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.392083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392128 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392138 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:34.392149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.392166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.392176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 
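The healthcheck block inside each service entry (interval, retries, start_period and timeout in seconds, plus a test command such as healthcheck_curl, healthcheck_port or healthcheck_listen) becomes the container's Docker healthcheck and runs inside the container. The healthcheck_curl case amounts roughly to an HTTP probe of the bound API address; the sketch below only approximates that idea, since the real helper is a shell script shipped in the kolla images and may differ in detail:

```python
# Rough approximation of a healthcheck_curl-style probe, modelled here as
# "the endpoint must answer with a non-error HTTP status".
import urllib.request
import urllib.error


def probe_http(url: str, timeout: float = 30.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, DNS failure, ...


# Values taken from the designate-api healthcheck in the log:
print(probe_http("http://192.168.16.10:9001", timeout=30.0))
```

The healthcheck_port and healthcheck_listen variants check sockets of the named process instead of issuing an HTTP request.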
16:31:34.392203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.392232 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:34.392242 | orchestrator | 2025-05-31 16:31:34.392252 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-31 16:31:34.392262 | orchestrator | Saturday 31 May 2025 16:29:15 +0000 (0:00:01.495) 0:00:42.004 ********** 2025-05-31 16:31:34.392279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.392290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.392306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.392316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 
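For readability, the designate-backend-bind9 item from the loop above is re-rendered below as YAML; every value is copied from the log output itself, only the layout is new. Judging by the helper names, healthcheck_curl probes an HTTP endpoint, healthcheck_listen checks for a listening socket, and healthcheck_port checks for an established connection to the given port (5672, the RabbitMQ port); these settings become the container's Docker healthcheck. The designate-api items additionally carry a haproxy block that exposes the service internally and externally on port 9001, as visible in the entries above.

```yaml
# designate-backend-bind9 item from the loop above, re-rendered as YAML for
# readability; all values are copied from the log output, only the layout is new.
designate-backend-bind9:
  container_name: designate_backend_bind9
  group: designate-backend-bind9
  enabled: true
  image: registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206
  volumes:
    - /etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - designate_backend_bind9:/var/lib/named/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_listen named 53"]
    timeout: "30"
```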
16:31:34.392378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.392432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.393923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.393950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.393961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.393971 | orchestrator | 2025-05-31 16:31:34.393981 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-31 16:31:34.393992 | orchestrator | Saturday 31 May 2025 16:29:23 +0000 (0:00:07.293) 0:00:49.298 ********** 2025-05-31 16:31:34.394003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.394090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.394114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.394133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394365 | orchestrator | 2025-05-31 16:31:34.394375 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-31 16:31:34.394385 | orchestrator | Saturday 31 May 2025 16:29:49 +0000 (0:00:26.589) 0:01:15.887 ********** 2025-05-31 16:31:34.394395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-31 16:31:34.394407 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-31 16:31:34.394416 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-31 16:31:34.394426 | orchestrator | 2025-05-31 16:31:34.394435 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-31 16:31:34.394445 | orchestrator | Saturday 31 May 2025 16:29:56 +0000 (0:00:07.271) 0:01:23.159 ********** 2025-05-31 16:31:34.394454 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-31 16:31:34.394470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-31 16:31:34.394480 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-31 16:31:34.394489 | orchestrator | 2025-05-31 16:31:34.394499 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-31 16:31:34.394509 | orchestrator | Saturday 31 May 2025 16:29:59 +0000 (0:00:02.712) 0:01:25.871 ********** 2025-05-31 16:31:34.394518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.394529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.394540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.394556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.394662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394689 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.394699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395255 | orchestrator | 2025-05-31 16:31:34.395265 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-31 16:31:34.395274 | orchestrator | Saturday 31 May 2025 16:30:02 +0000 (0:00:03.172) 0:01:29.044 ********** 2025-05-31 16:31:34.395284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.395305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.395322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 
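The rndc.conf and rndc.key tasks in this run report "changed" only for the designate-backend-bind9 and designate-worker items and skip the rest, which matches how the bind9 backend typically works: the worker drives named through rndc, while named pulls zone contents from designate-mdns. The pools.yaml rendered a few tasks earlier ties these pieces together. A minimal sketch of such a pool definition follows; the hostnames, addresses, ports and key path are illustrative assumptions, not values from this deployment (the real file is produced from pools.yaml.j2).

```yaml
# Minimal sketch of a designate pools.yaml with a single bind9 target.
# Hostnames, addresses, ports and the key path are illustrative assumptions,
# not values from this job; the real file is rendered from pools.yaml.j2.
- name: default
  description: Default BIND9 pool
  ns_records:
    - hostname: ns1.example.org.
      priority: 1
  nameservers:
    - host: 192.0.2.10        # address polled to verify zones are live (assumption)
      port: 53
  targets:
    - type: bind9
      description: BIND9 server managed via rndc
      masters:
        - host: 192.0.2.10    # designate-mdns, source of zone transfers (assumption)
          port: 5354
      options:
        host: 192.0.2.10      # named itself (assumption)
        port: 53
        rndc_host: 192.0.2.10
        rndc_port: 953
        rndc_key_file: /etc/designate/rndc.key
```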
16:31:34.395333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.395745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395755 | orchestrator | 2025-05-31 16:31:34.395765 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-31 16:31:34.395775 | orchestrator | Saturday 31 May 2025 16:30:06 +0000 (0:00:03.461) 0:01:32.506 ********** 2025-05-31 16:31:34.395784 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.395794 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:34.395804 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:34.395813 | orchestrator | 2025-05-31 16:31:34.395823 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-31 16:31:34.395832 | orchestrator | Saturday 31 May 2025 16:30:06 +0000 (0:00:00.316) 0:01:32.822 ********** 2025-05-31 16:31:34.395875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.395887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.395897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.395965 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.395976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.395992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.396002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396066 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:34.396076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-31 16:31:34.396092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-31 16:31:34.396102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396168 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:34.396178 | orchestrator | 2025-05-31 16:31:34.396188 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-31 16:31:34.396197 | orchestrator | Saturday 31 May 2025 16:30:08 +0000 (0:00:01.530) 0:01:34.353 ********** 2025-05-31 16:31:34.396207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.396222 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.396233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-31 16:31:34.396252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-31 16:31:34.396484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-31 16:31:34.396495 | orchestrator | 2025-05-31 16:31:34.396506 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-31 16:31:34.396516 | orchestrator | Saturday 31 May 2025 16:30:14 +0000 (0:00:05.900) 0:01:40.254 ********** 2025-05-31 16:31:34.396527 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:34.396538 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:34.396549 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:34.396559 | orchestrator | 2025-05-31 16:31:34.396575 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-31 16:31:34.396586 | orchestrator | Saturday 31 May 2025 16:30:14 +0000 (0:00:00.384) 0:01:40.638 ********** 2025-05-31 16:31:34.396598 | orchestrator | changed: [testbed-node-0] => 
(item=designate) 2025-05-31 16:31:34.396607 | orchestrator | 2025-05-31 16:31:34.396617 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-31 16:31:34.396626 | orchestrator | Saturday 31 May 2025 16:30:16 +0000 (0:00:02.494) 0:01:43.132 ********** 2025-05-31 16:31:34.396636 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:31:34.396646 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-31 16:31:34.396655 | orchestrator | 2025-05-31 16:31:34.396665 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-31 16:31:34.396680 | orchestrator | Saturday 31 May 2025 16:30:19 +0000 (0:00:02.593) 0:01:45.726 ********** 2025-05-31 16:31:34.396689 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.396699 | orchestrator | 2025-05-31 16:31:34.396708 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-31 16:31:34.396718 | orchestrator | Saturday 31 May 2025 16:30:35 +0000 (0:00:15.694) 0:02:01.420 ********** 2025-05-31 16:31:34.396727 | orchestrator | 2025-05-31 16:31:34.396738 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-31 16:31:34.396747 | orchestrator | Saturday 31 May 2025 16:30:35 +0000 (0:00:00.056) 0:02:01.476 ********** 2025-05-31 16:31:34.396757 | orchestrator | 2025-05-31 16:31:34.396766 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-31 16:31:34.396781 | orchestrator | Saturday 31 May 2025 16:30:35 +0000 (0:00:00.056) 0:02:01.533 ********** 2025-05-31 16:31:34.396791 | orchestrator | 2025-05-31 16:31:34.396800 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-31 16:31:34.396810 | orchestrator | Saturday 31 May 2025 16:30:35 +0000 (0:00:00.064) 0:02:01.598 ********** 2025-05-31 16:31:34.396819 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.396828 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:34.396838 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:34.396862 | orchestrator | 2025-05-31 16:31:34.396872 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-31 16:31:34.396881 | orchestrator | Saturday 31 May 2025 16:30:42 +0000 (0:00:07.383) 0:02:08.981 ********** 2025-05-31 16:31:34.396891 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.396900 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:34.396910 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:34.396919 | orchestrator | 2025-05-31 16:31:34.396929 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-31 16:31:34.396938 | orchestrator | Saturday 31 May 2025 16:30:49 +0000 (0:00:06.517) 0:02:15.499 ********** 2025-05-31 16:31:34.396947 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.396957 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:34.396966 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:34.396976 | orchestrator | 2025-05-31 16:31:34.396985 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-31 16:31:34.396995 | orchestrator | Saturday 31 May 2025 16:31:00 +0000 (0:00:11.410) 0:02:26.909 ********** 2025-05-31 16:31:34.397004 | orchestrator | changed: 
[testbed-node-0] 2025-05-31 16:31:34.397014 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:34.397023 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:34.397032 | orchestrator | 2025-05-31 16:31:34.397042 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-31 16:31:34.397051 | orchestrator | Saturday 31 May 2025 16:31:06 +0000 (0:00:06.229) 0:02:33.139 ********** 2025-05-31 16:31:34.397061 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:34.397071 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.397080 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:34.397090 | orchestrator | 2025-05-31 16:31:34.397099 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-31 16:31:34.397109 | orchestrator | Saturday 31 May 2025 16:31:16 +0000 (0:00:10.061) 0:02:43.200 ********** 2025-05-31 16:31:34.397118 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.397128 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:34.397137 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:34.397146 | orchestrator | 2025-05-31 16:31:34.397156 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-31 16:31:34.397165 | orchestrator | Saturday 31 May 2025 16:31:28 +0000 (0:00:11.015) 0:02:54.215 ********** 2025-05-31 16:31:34.397175 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:34.397184 | orchestrator | 2025-05-31 16:31:34.397194 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:31:34.397211 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-31 16:31:34.397222 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:31:34.397231 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:31:34.397241 | orchestrator | 2025-05-31 16:31:34.397250 | orchestrator | 2025-05-31 16:31:34.397260 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:31:34.397269 | orchestrator | Saturday 31 May 2025 16:31:33 +0000 (0:00:05.140) 0:02:59.356 ********** 2025-05-31 16:31:34.397279 | orchestrator | =============================================================================== 2025-05-31 16:31:34.397288 | orchestrator | designate : Copying over designate.conf -------------------------------- 26.59s 2025-05-31 16:31:34.397298 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.69s 2025-05-31 16:31:34.397307 | orchestrator | designate : Restart designate-central container ------------------------ 11.41s 2025-05-31 16:31:34.397321 | orchestrator | designate : Restart designate-worker container ------------------------- 11.02s 2025-05-31 16:31:34.397331 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.06s 2025-05-31 16:31:34.397340 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.38s 2025-05-31 16:31:34.397350 | orchestrator | designate : Copying over config.json files for services ----------------- 7.29s 2025-05-31 16:31:34.397359 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.27s 2025-05-31 16:31:34.397368 | 
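Every designate service item above repeats the same healthcheck block (interval, retries, start_period, test, timeout), with a test command such as healthcheck_port designate-worker 5672 or healthcheck_curl http://192.168.16.10:9001. Purely as an illustrative sketch, and not the actual healthcheck_port helper shipped inside the Kolla images (which also ties the check to the named process), the snippet below shows how those timing parameters could drive a simple TCP probe; in the real deployment the values are rendered into the container's healthcheck settings and the container engine runs the probe on that schedule.

    import socket
    import time

    def probe_tcp(port, host="127.0.0.1", retries=3, interval=30.0,
                  start_period=5.0, timeout=30.0):
        """Hypothetical stand-in for a port healthcheck: succeed once a TCP
        connection to host:port can be opened within the retry budget."""
        time.sleep(start_period)              # grace period before the first probe
        for attempt in range(retries):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True               # something answered on the port
            except OSError:
                if attempt < retries - 1:
                    time.sleep(interval)      # wait before probing again
        return False

    # Values taken from the designate-worker item above:
    # probe_tcp(5672, retries=3, interval=30, start_period=5, timeout=30)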
orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.08s 2025-05-31 16:31:34.397378 | orchestrator | designate : Restart designate-api container ----------------------------- 6.52s 2025-05-31 16:31:34.397387 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.49s 2025-05-31 16:31:34.397397 | orchestrator | designate : Restart designate-producer container ------------------------ 6.23s 2025-05-31 16:31:34.397406 | orchestrator | designate : Check designate containers ---------------------------------- 5.90s 2025-05-31 16:31:34.397415 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.14s 2025-05-31 16:31:34.397425 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.34s 2025-05-31 16:31:34.397434 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.18s 2025-05-31 16:31:34.397444 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.09s 2025-05-31 16:31:34.397458 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.66s 2025-05-31 16:31:34.397468 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.46s 2025-05-31 16:31:34.397477 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.34s 2025-05-31 16:31:34.397487 | orchestrator | 2025-05-31 16:31:34 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:34.397497 | orchestrator | 2025-05-31 16:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:37.448687 | orchestrator | 2025-05-31 16:31:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:37.450518 | orchestrator | 2025-05-31 16:31:37 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:37.453686 | orchestrator | 2025-05-31 16:31:37 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:37.455833 | orchestrator | 2025-05-31 16:31:37 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:37.458586 | orchestrator | 2025-05-31 16:31:37 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:37.458654 | orchestrator | 2025-05-31 16:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:40.514186 | orchestrator | 2025-05-31 16:31:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:40.515144 | orchestrator | 2025-05-31 16:31:40 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:40.515778 | orchestrator | 2025-05-31 16:31:40 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:40.519632 | orchestrator | 2025-05-31 16:31:40 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:40.520730 | orchestrator | 2025-05-31 16:31:40 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:40.521210 | orchestrator | 2025-05-31 16:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:43.571513 | orchestrator | 2025-05-31 16:31:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:43.574078 | orchestrator | 2025-05-31 16:31:43 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state 
STARTED 2025-05-31 16:31:43.575902 | orchestrator | 2025-05-31 16:31:43 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:43.577520 | orchestrator | 2025-05-31 16:31:43 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:43.578927 | orchestrator | 2025-05-31 16:31:43 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:43.578958 | orchestrator | 2025-05-31 16:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:46.625466 | orchestrator | 2025-05-31 16:31:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:46.626169 | orchestrator | 2025-05-31 16:31:46 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:46.627526 | orchestrator | 2025-05-31 16:31:46 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:46.628587 | orchestrator | 2025-05-31 16:31:46 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:46.630065 | orchestrator | 2025-05-31 16:31:46 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:46.630097 | orchestrator | 2025-05-31 16:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:49.687223 | orchestrator | 2025-05-31 16:31:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:49.690121 | orchestrator | 2025-05-31 16:31:49 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:49.691670 | orchestrator | 2025-05-31 16:31:49 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:49.694363 | orchestrator | 2025-05-31 16:31:49 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:49.696830 | orchestrator | 2025-05-31 16:31:49 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:49.696903 | orchestrator | 2025-05-31 16:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:52.748496 | orchestrator | 2025-05-31 16:31:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:52.748593 | orchestrator | 2025-05-31 16:31:52 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:52.748606 | orchestrator | 2025-05-31 16:31:52 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state STARTED 2025-05-31 16:31:52.748649 | orchestrator | 2025-05-31 16:31:52 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:52.748660 | orchestrator | 2025-05-31 16:31:52 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:52.748671 | orchestrator | 2025-05-31 16:31:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:55.787165 | orchestrator | 2025-05-31 16:31:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:55.787770 | orchestrator | 2025-05-31 16:31:55 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:55.788020 | orchestrator | 2025-05-31 16:31:55 | INFO  | Task 6a61a51e-968a-411a-abe1-a33c6e8c30ea is in state SUCCESS 2025-05-31 16:31:55.789580 | orchestrator | 2025-05-31 16:31:55.789618 | orchestrator | 2025-05-31 16:31:55.789631 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:31:55.789644 | 
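The repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the wrapper that keeps polling the Celery-style task IDs until each one reports a final state such as SUCCESS (as task 6a61a51e-968a-411a-abe1-a33c6e8c30ea does just above). A minimal sketch of such a wait loop, with get_task_state standing in as a hypothetical lookup for whatever the tool really calls:

    import time

    def wait_for_tasks(task_ids, get_task_state, poll_seconds=1):
        """Poll task IDs until none of them is still PENDING/STARTED.
        get_task_state(task_id) -> str is a placeholder for the real lookup."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {poll_seconds} second(s) until the next check")
                time.sleep(poll_seconds)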
orchestrator | 2025-05-31 16:31:55.789656 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:31:55.789668 | orchestrator | Saturday 31 May 2025 16:30:44 +0000 (0:00:00.259) 0:00:00.259 ********** 2025-05-31 16:31:55.789680 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:31:55.789693 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:31:55.789704 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:31:55.789716 | orchestrator | 2025-05-31 16:31:55.789728 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:31:55.789739 | orchestrator | Saturday 31 May 2025 16:30:45 +0000 (0:00:00.720) 0:00:00.979 ********** 2025-05-31 16:31:55.789751 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-31 16:31:55.789763 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-31 16:31:55.789774 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-31 16:31:55.789786 | orchestrator | 2025-05-31 16:31:55.789797 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-31 16:31:55.789808 | orchestrator | 2025-05-31 16:31:55.789819 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-31 16:31:55.789831 | orchestrator | Saturday 31 May 2025 16:30:46 +0000 (0:00:00.489) 0:00:01.469 ********** 2025-05-31 16:31:55.789842 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:31:55.789896 | orchestrator | 2025-05-31 16:31:55.789907 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-31 16:31:55.789918 | orchestrator | Saturday 31 May 2025 16:30:46 +0000 (0:00:00.567) 0:00:02.037 ********** 2025-05-31 16:31:55.789929 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-31 16:31:55.789940 | orchestrator | 2025-05-31 16:31:55.789951 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-31 16:31:55.789962 | orchestrator | Saturday 31 May 2025 16:30:50 +0000 (0:00:03.493) 0:00:05.530 ********** 2025-05-31 16:31:55.789972 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-31 16:31:55.789984 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-31 16:31:55.789995 | orchestrator | 2025-05-31 16:31:55.790006 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-31 16:31:55.790188 | orchestrator | Saturday 31 May 2025 16:30:56 +0000 (0:00:06.860) 0:00:12.391 ********** 2025-05-31 16:31:55.790210 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:31:55.790221 | orchestrator | 2025-05-31 16:31:55.790232 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-31 16:31:55.790242 | orchestrator | Saturday 31 May 2025 16:31:00 +0000 (0:00:03.682) 0:00:16.073 ********** 2025-05-31 16:31:55.790253 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:31:55.790305 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-31 16:31:55.790318 | orchestrator | 2025-05-31 16:31:55.790329 | orchestrator | TASK 
[service-ks-register : placement | Creating roles] ************************ 2025-05-31 16:31:55.790339 | orchestrator | Saturday 31 May 2025 16:31:04 +0000 (0:00:04.064) 0:00:20.138 ********** 2025-05-31 16:31:55.790351 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:31:55.790362 | orchestrator | 2025-05-31 16:31:55.790372 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-31 16:31:55.790383 | orchestrator | Saturday 31 May 2025 16:31:08 +0000 (0:00:03.519) 0:00:23.657 ********** 2025-05-31 16:31:55.790394 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-31 16:31:55.790405 | orchestrator | 2025-05-31 16:31:55.790416 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-31 16:31:55.790426 | orchestrator | Saturday 31 May 2025 16:31:13 +0000 (0:00:04.789) 0:00:28.447 ********** 2025-05-31 16:31:55.790437 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:55.790448 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:55.790460 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:55.790471 | orchestrator | 2025-05-31 16:31:55.790481 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-31 16:31:55.790492 | orchestrator | Saturday 31 May 2025 16:31:13 +0000 (0:00:00.410) 0:00:28.857 ********** 2025-05-31 16:31:55.790508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.790543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.790556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.790578 | orchestrator | 2025-05-31 16:31:55.790590 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-31 16:31:55.790601 | orchestrator | Saturday 31 May 2025 16:31:14 +0000 (0:00:01.118) 0:00:29.976 ********** 2025-05-31 16:31:55.790612 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:55.790623 | orchestrator | 2025-05-31 16:31:55.790634 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-31 16:31:55.790645 | orchestrator | Saturday 31 May 2025 16:31:14 +0000 (0:00:00.243) 0:00:30.219 ********** 2025-05-31 16:31:55.790661 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:55.790673 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:55.790684 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:55.790694 | orchestrator | 2025-05-31 16:31:55.790706 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-31 16:31:55.790717 | orchestrator | Saturday 31 May 2025 16:31:15 +0000 (0:00:00.267) 0:00:30.487 ********** 2025-05-31 16:31:55.790728 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:31:55.790739 | orchestrator | 2025-05-31 16:31:55.790749 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-31 16:31:55.790760 | orchestrator | Saturday 31 May 2025 16:31:15 +0000 (0:00:00.740) 0:00:31.227 ********** 2025-05-31 16:31:55.790772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.790795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.790808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.790826 | orchestrator | 2025-05-31 16:31:55.790839 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-31 16:31:55.790878 | orchestrator | Saturday 31 May 2025 16:31:17 +0000 (0:00:01.680) 0:00:32.908 ********** 2025-05-31 16:31:55.790897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.790910 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:55.790923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.790935 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:55.790956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.790969 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:55.790981 | orchestrator | 2025-05-31 16:31:55.790993 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-31 16:31:55.791005 | orchestrator | Saturday 31 May 2025 16:31:18 +0000 (0:00:00.738) 0:00:33.646 ********** 2025-05-31 16:31:55.791018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.791037 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:55.791055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.791068 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:55.791081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.791093 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:55.791105 | orchestrator | 2025-05-31 16:31:55.791117 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-31 16:31:55.791129 | orchestrator | Saturday 31 May 2025 16:31:19 +0000 (0:00:01.405) 0:00:35.051 ********** 2025-05-31 16:31:55.791151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791184 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791195 | orchestrator | 2025-05-31 16:31:55.791219 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-31 16:31:55.791230 | orchestrator | Saturday 31 May 2025 16:31:21 +0000 (0:00:01.776) 0:00:36.827 ********** 2025-05-31 16:31:55.791241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791290 | orchestrator | 2025-05-31 16:31:55.791301 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-31 16:31:55.791312 | orchestrator | Saturday 31 May 2025 16:31:23 +0000 (0:00:02.382) 0:00:39.210 ********** 2025-05-31 16:31:55.791323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-31 16:31:55.791334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-31 16:31:55.791345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-31 16:31:55.791356 | orchestrator | 2025-05-31 16:31:55.791367 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-31 16:31:55.791377 | orchestrator | Saturday 31 May 2025 16:31:25 +0000 (0:00:01.872) 0:00:41.083 ********** 2025-05-31 16:31:55.791388 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:55.791399 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:55.791410 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:55.791420 | orchestrator | 2025-05-31 16:31:55.791431 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-31 16:31:55.791442 | orchestrator | Saturday 31 May 2025 16:31:27 +0000 (0:00:01.658) 0:00:42.742 ********** 2025-05-31 16:31:55.791457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.791469 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:31:55.791480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.791497 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:31:55.791516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 16:31:55.791528 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:31:55.791538 | orchestrator | 2025-05-31 16:31:55.791549 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-31 16:31:55.791560 | orchestrator | Saturday 31 May 2025 16:31:28 +0000 (0:00:00.750) 0:00:43.492 ********** 2025-05-31 16:31:55.791571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 16:31:55.791617 | orchestrator | 2025-05-31 16:31:55.791628 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-31 16:31:55.791639 | orchestrator | Saturday 31 May 2025 16:31:29 +0000 (0:00:01.425) 0:00:44.917 ********** 2025-05-31 16:31:55.791650 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:55.791660 | orchestrator | 2025-05-31 16:31:55.791671 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-31 16:31:55.791682 | orchestrator | Saturday 31 May 2025 16:31:32 +0000 (0:00:02.584) 0:00:47.501 ********** 2025-05-31 16:31:55.791692 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:55.791703 | orchestrator | 2025-05-31 16:31:55.791713 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-31 16:31:55.791724 | orchestrator | Saturday 31 May 2025 16:31:34 +0000 (0:00:02.486) 0:00:49.988 ********** 2025-05-31 16:31:55.791741 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:55.791752 | orchestrator | 2025-05-31 16:31:55.791763 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-31 16:31:55.791773 | orchestrator | Saturday 31 May 2025 16:31:48 +0000 (0:00:13.704) 0:01:03.693 ********** 2025-05-31 16:31:55.791784 | orchestrator | 2025-05-31 16:31:55.791794 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-31 16:31:55.791805 | orchestrator | Saturday 31 May 2025 16:31:48 +0000 (0:00:00.061) 0:01:03.754 ********** 2025-05-31 16:31:55.791816 | orchestrator | 2025-05-31 16:31:55.791826 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-31 16:31:55.791837 | orchestrator | Saturday 31 May 2025 16:31:48 +0000 (0:00:00.165) 0:01:03.919 ********** 2025-05-31 16:31:55.791865 | orchestrator | 2025-05-31 16:31:55.791877 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-31 16:31:55.791888 | orchestrator | Saturday 31 May 2025 16:31:48 +0000 (0:00:00.054) 0:01:03.974 ********** 2025-05-31 16:31:55.791898 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:31:55.791909 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:31:55.791920 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:31:55.791931 | orchestrator | 2025-05-31 16:31:55.791941 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:31:55.791953 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 16:31:55.791966 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 16:31:55.791977 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 16:31:55.791988 | orchestrator | 2025-05-31 16:31:55.791998 | orchestrator | 2025-05-31 16:31:55.792009 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:31:55.792020 | orchestrator | Saturday 31 May 2025 16:31:53 +0000 (0:00:04.854) 0:01:08.829 ********** 2025-05-31 16:31:55.792031 | orchestrator | =============================================================================== 2025-05-31 16:31:55.792041 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.70s 2025-05-31 16:31:55.792052 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.86s 2025-05-31 16:31:55.792063 | orchestrator | placement : Restart placement-api container ----------------------------- 4.85s 2025-05-31 16:31:55.792073 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.79s 2025-05-31 16:31:55.792084 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.06s 2025-05-31 16:31:55.792094 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.68s 2025-05-31 16:31:55.792110 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.52s 2025-05-31 16:31:55.792129 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.50s 2025-05-31 16:31:55.792139 | orchestrator | placement : Creating placement databases -------------------------------- 2.58s 2025-05-31 16:31:55.792150 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.49s 2025-05-31 16:31:55.792160 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.38s 2025-05-31 16:31:55.792171 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.87s 2025-05-31 16:31:55.792181 | orchestrator | placement : Copying over config.json files for services ----------------- 1.78s 2025-05-31 16:31:55.792192 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.68s 2025-05-31 16:31:55.792202 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.66s 2025-05-31 16:31:55.792213 | orchestrator | placement : Check placement containers ---------------------------------- 1.43s 2025-05-31 16:31:55.792223 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.41s 2025-05-31 16:31:55.792234 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.12s 2025-05-31 16:31:55.792245 | orchestrator | placement : Copying over existing policy file --------------------------- 0.75s 2025-05-31 16:31:55.792255 | orchestrator | placement : include_tasks ----------------------------------------------- 0.74s 2025-05-31 16:31:55.792266 | orchestrator | 2025-05-31 16:31:55 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in 
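Annotation: the placement_api service definitions dumped above each carry a healthcheck block ('interval': '30', 'retries': '3', 'start_period': '5', 'timeout': '30', test 'healthcheck_curl http://<api_interface_address>:8780'). Below is a minimal, purely illustrative Python sketch of what such a probe amounts to; the function name and the use of urllib instead of kolla's healthcheck_curl helper are assumptions for illustration only, while the URL and timing values are taken from the log.

import time
import urllib.error
import urllib.request

def probe_placement_api(url="http://192.168.16.10:8780",
                        retries=3, interval=30, timeout=30, start_period=5):
    """Illustrative liveness probe mirroring the healthcheck fields dumped in the log."""
    time.sleep(start_period)                      # grace period before the first attempt
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True                       # any successful HTTP response counts as healthy
        except urllib.error.HTTPError as exc:
            if exc.code < 500:                    # a 4xx still shows the API is up and answering
                return True
        except (urllib.error.URLError, OSError):
            pass                                  # connection refused / timeout: retry
        if attempt < retries - 1:
            time.sleep(interval)
    return False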
state STARTED 2025-05-31 16:31:55.792276 | orchestrator | 2025-05-31 16:31:55 | INFO  | Task 1e6c804f-722f-45b1-99fd-eefadbeee243 is in state STARTED 2025-05-31 16:31:55.792287 | orchestrator | 2025-05-31 16:31:55 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:55.792298 | orchestrator | 2025-05-31 16:31:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:31:58.814898 | orchestrator | 2025-05-31 16:31:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:31:58.814968 | orchestrator | 2025-05-31 16:31:58 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:31:58.815430 | orchestrator | 2025-05-31 16:31:58 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:31:58.815721 | orchestrator | 2025-05-31 16:31:58 | INFO  | Task 1e6c804f-722f-45b1-99fd-eefadbeee243 is in state SUCCESS 2025-05-31 16:31:58.816260 | orchestrator | 2025-05-31 16:31:58 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:31:58.816283 | orchestrator | 2025-05-31 16:31:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:01.864756 | orchestrator | 2025-05-31 16:32:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:01.864838 | orchestrator | 2025-05-31 16:32:01 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:01.864965 | orchestrator | 2025-05-31 16:32:01 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:01.864993 | orchestrator | 2025-05-31 16:32:01 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:01.866468 | orchestrator | 2025-05-31 16:32:01 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:01.866492 | orchestrator | 2025-05-31 16:32:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:04.906248 | orchestrator | 2025-05-31 16:32:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:04.908409 | orchestrator | 2025-05-31 16:32:04 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:04.909644 | orchestrator | 2025-05-31 16:32:04 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:04.910405 | orchestrator | 2025-05-31 16:32:04 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:04.912179 | orchestrator | 2025-05-31 16:32:04 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:04.912312 | orchestrator | 2025-05-31 16:32:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:07.953154 | orchestrator | 2025-05-31 16:32:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:07.954201 | orchestrator | 2025-05-31 16:32:07 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:07.956022 | orchestrator | 2025-05-31 16:32:07 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:07.957209 | orchestrator | 2025-05-31 16:32:07 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:07.958991 | orchestrator | 2025-05-31 16:32:07 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:07.959034 | orchestrator | 2025-05-31 16:32:07 | INFO  | Wait 1 second(s) until the 
next check 2025-05-31 16:32:11.005222 | orchestrator | 2025-05-31 16:32:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:11.014958 | orchestrator | 2025-05-31 16:32:11 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:11.016831 | orchestrator | 2025-05-31 16:32:11 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:11.019667 | orchestrator | 2025-05-31 16:32:11 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:11.020603 | orchestrator | 2025-05-31 16:32:11 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:11.020627 | orchestrator | 2025-05-31 16:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:14.080944 | orchestrator | 2025-05-31 16:32:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:14.085072 | orchestrator | 2025-05-31 16:32:14 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:14.085553 | orchestrator | 2025-05-31 16:32:14 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:14.086307 | orchestrator | 2025-05-31 16:32:14 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:14.087153 | orchestrator | 2025-05-31 16:32:14 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:14.087761 | orchestrator | 2025-05-31 16:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:17.127326 | orchestrator | 2025-05-31 16:32:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:17.128736 | orchestrator | 2025-05-31 16:32:17 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:17.129119 | orchestrator | 2025-05-31 16:32:17 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:17.131137 | orchestrator | 2025-05-31 16:32:17 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:17.131917 | orchestrator | 2025-05-31 16:32:17 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:17.131946 | orchestrator | 2025-05-31 16:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:20.157709 | orchestrator | 2025-05-31 16:32:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:20.157949 | orchestrator | 2025-05-31 16:32:20 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:20.159266 | orchestrator | 2025-05-31 16:32:20 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:20.159752 | orchestrator | 2025-05-31 16:32:20 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:20.160261 | orchestrator | 2025-05-31 16:32:20 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:20.160285 | orchestrator | 2025-05-31 16:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:23.188801 | orchestrator | 2025-05-31 16:32:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:23.188911 | orchestrator | 2025-05-31 16:32:23 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:23.189361 | orchestrator | 2025-05-31 16:32:23 | INFO  | Task 
cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:23.189904 | orchestrator | 2025-05-31 16:32:23 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:23.190415 | orchestrator | 2025-05-31 16:32:23 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:23.190450 | orchestrator | 2025-05-31 16:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:26.235956 | orchestrator | 2025-05-31 16:32:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:26.236050 | orchestrator | 2025-05-31 16:32:26 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:26.236952 | orchestrator | 2025-05-31 16:32:26 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:26.239131 | orchestrator | 2025-05-31 16:32:26 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:26.240014 | orchestrator | 2025-05-31 16:32:26 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:26.240043 | orchestrator | 2025-05-31 16:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:29.276241 | orchestrator | 2025-05-31 16:32:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:29.276435 | orchestrator | 2025-05-31 16:32:29 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:29.277081 | orchestrator | 2025-05-31 16:32:29 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:29.277638 | orchestrator | 2025-05-31 16:32:29 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:29.278365 | orchestrator | 2025-05-31 16:32:29 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:29.278392 | orchestrator | 2025-05-31 16:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:32.313829 | orchestrator | 2025-05-31 16:32:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:32.313953 | orchestrator | 2025-05-31 16:32:32 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:32.314283 | orchestrator | 2025-05-31 16:32:32 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:32.314814 | orchestrator | 2025-05-31 16:32:32 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:32.315473 | orchestrator | 2025-05-31 16:32:32 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:32.315533 | orchestrator | 2025-05-31 16:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:35.340353 | orchestrator | 2025-05-31 16:32:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:35.340407 | orchestrator | 2025-05-31 16:32:35 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:35.340770 | orchestrator | 2025-05-31 16:32:35 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:35.342118 | orchestrator | 2025-05-31 16:32:35 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:35.342715 | orchestrator | 2025-05-31 16:32:35 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:35.342884 | orchestrator | 2025-05-31 
16:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:38.383212 | orchestrator | 2025-05-31 16:32:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:38.383503 | orchestrator | 2025-05-31 16:32:38 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:38.384072 | orchestrator | 2025-05-31 16:32:38 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:38.384711 | orchestrator | 2025-05-31 16:32:38 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:38.388793 | orchestrator | 2025-05-31 16:32:38 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:38.388817 | orchestrator | 2025-05-31 16:32:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:41.416533 | orchestrator | 2025-05-31 16:32:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:41.416977 | orchestrator | 2025-05-31 16:32:41 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:41.417551 | orchestrator | 2025-05-31 16:32:41 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:41.418112 | orchestrator | 2025-05-31 16:32:41 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:41.418737 | orchestrator | 2025-05-31 16:32:41 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:41.418774 | orchestrator | 2025-05-31 16:32:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:44.435804 | orchestrator | 2025-05-31 16:32:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:44.437277 | orchestrator | 2025-05-31 16:32:44 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:44.439700 | orchestrator | 2025-05-31 16:32:44 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:44.440674 | orchestrator | 2025-05-31 16:32:44 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:44.441904 | orchestrator | 2025-05-31 16:32:44 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:44.442078 | orchestrator | 2025-05-31 16:32:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:47.495240 | orchestrator | 2025-05-31 16:32:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:47.496336 | orchestrator | 2025-05-31 16:32:47 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:47.497957 | orchestrator | 2025-05-31 16:32:47 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:47.499093 | orchestrator | 2025-05-31 16:32:47 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:47.500388 | orchestrator | 2025-05-31 16:32:47 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:47.500557 | orchestrator | 2025-05-31 16:32:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:50.548704 | orchestrator | 2025-05-31 16:32:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:50.549771 | orchestrator | 2025-05-31 16:32:50 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:50.551476 | orchestrator | 2025-05-31 
16:32:50 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:50.553293 | orchestrator | 2025-05-31 16:32:50 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:50.555808 | orchestrator | 2025-05-31 16:32:50 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:50.556010 | orchestrator | 2025-05-31 16:32:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:53.621539 | orchestrator | 2025-05-31 16:32:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:53.623650 | orchestrator | 2025-05-31 16:32:53 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:53.625165 | orchestrator | 2025-05-31 16:32:53 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:53.626576 | orchestrator | 2025-05-31 16:32:53 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:53.627827 | orchestrator | 2025-05-31 16:32:53 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:53.627857 | orchestrator | 2025-05-31 16:32:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:56.693118 | orchestrator | 2025-05-31 16:32:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:56.694129 | orchestrator | 2025-05-31 16:32:56 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:56.695801 | orchestrator | 2025-05-31 16:32:56 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:56.697349 | orchestrator | 2025-05-31 16:32:56 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:56.700005 | orchestrator | 2025-05-31 16:32:56 | INFO  | Task 4ebc42e6-e7c9-43c3-afb0-2f57ee5926c6 is in state STARTED 2025-05-31 16:32:56.703050 | orchestrator | 2025-05-31 16:32:56 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:56.703496 | orchestrator | 2025-05-31 16:32:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:32:59.758362 | orchestrator | 2025-05-31 16:32:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:32:59.761504 | orchestrator | 2025-05-31 16:32:59 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:32:59.764495 | orchestrator | 2025-05-31 16:32:59 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:32:59.767725 | orchestrator | 2025-05-31 16:32:59 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:32:59.769694 | orchestrator | 2025-05-31 16:32:59 | INFO  | Task 4ebc42e6-e7c9-43c3-afb0-2f57ee5926c6 is in state STARTED 2025-05-31 16:32:59.771336 | orchestrator | 2025-05-31 16:32:59 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:32:59.771811 | orchestrator | 2025-05-31 16:32:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:02.823059 | orchestrator | 2025-05-31 16:33:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:02.823164 | orchestrator | 2025-05-31 16:33:02 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:02.823247 | orchestrator | 2025-05-31 16:33:02 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:02.824617 | 
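Annotation: the repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines around here come from the OSISM task watcher, which re-queries each submitted task id and sleeps briefly until every task reaches a terminal state. The following generic Python sketch shows that polling pattern; get_task_state is a hypothetical stand-in passed in by the caller, not the actual OSISM API.

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, poll_interval=1.0):
    """Poll each task until all of them reach a terminal state, as the log above does."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)        # stand-in for querying the task backend
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        pending = {t for t, s in states.items() if s not in TERMINAL_STATES}
        if pending:
            print(f"Wait {int(poll_interval)} second(s) until the next check")
            time.sleep(poll_interval)
    return states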
orchestrator | 2025-05-31 16:33:02 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:02.825052 | orchestrator | 2025-05-31 16:33:02 | INFO  | Task 4ebc42e6-e7c9-43c3-afb0-2f57ee5926c6 is in state STARTED 2025-05-31 16:33:02.827744 | orchestrator | 2025-05-31 16:33:02 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state STARTED 2025-05-31 16:33:02.827839 | orchestrator | 2025-05-31 16:33:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:05.893239 | orchestrator | 2025-05-31 16:33:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:05.893293 | orchestrator | 2025-05-31 16:33:05 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:05.893300 | orchestrator | 2025-05-31 16:33:05 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:05.893749 | orchestrator | 2025-05-31 16:33:05 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:05.893977 | orchestrator | 2025-05-31 16:33:05 | INFO  | Task 4ebc42e6-e7c9-43c3-afb0-2f57ee5926c6 is in state SUCCESS 2025-05-31 16:33:05.901636 | orchestrator | 2025-05-31 16:33:05 | INFO  | Task 1a837330-d618-4cb1-a419-ee83b987aabc is in state SUCCESS 2025-05-31 16:33:05.903108 | orchestrator | 2025-05-31 16:33:05.903170 | orchestrator | 2025-05-31 16:33:05.903188 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:33:05.903206 | orchestrator | 2025-05-31 16:33:05.903267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:33:05.903348 | orchestrator | Saturday 31 May 2025 16:31:56 +0000 (0:00:00.188) 0:00:00.188 ********** 2025-05-31 16:33:05.903366 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:05.903400 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:05.903415 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:05.903431 | orchestrator | 2025-05-31 16:33:05.903443 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:33:05.903451 | orchestrator | Saturday 31 May 2025 16:31:57 +0000 (0:00:00.342) 0:00:00.530 ********** 2025-05-31 16:33:05.903460 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-31 16:33:05.903469 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-31 16:33:05.903478 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-31 16:33:05.903487 | orchestrator | 2025-05-31 16:33:05.903495 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-31 16:33:05.903504 | orchestrator | 2025-05-31 16:33:05.903512 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-31 16:33:05.903521 | orchestrator | Saturday 31 May 2025 16:31:57 +0000 (0:00:00.501) 0:00:01.032 ********** 2025-05-31 16:33:05.903530 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:05.903538 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:05.903547 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:05.903555 | orchestrator | 2025-05-31 16:33:05.903564 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:33:05.903573 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:33:05.903632 | 
orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:33:05.903661 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:33:05.903670 | orchestrator | 2025-05-31 16:33:05.903678 | orchestrator | 2025-05-31 16:33:05.903687 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:33:05.903716 | orchestrator | Saturday 31 May 2025 16:31:58 +0000 (0:00:00.721) 0:00:01.753 ********** 2025-05-31 16:33:05.903725 | orchestrator | =============================================================================== 2025-05-31 16:33:05.903734 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-05-31 16:33:05.903742 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-05-31 16:33:05.903751 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-05-31 16:33:05.903759 | orchestrator | 2025-05-31 16:33:05.903768 | orchestrator | None 2025-05-31 16:33:05.903777 | orchestrator | 2025-05-31 16:33:05.903785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:33:05.903794 | orchestrator | 2025-05-31 16:33:05.903802 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:33:05.903827 | orchestrator | Saturday 31 May 2025 16:28:33 +0000 (0:00:00.252) 0:00:00.252 ********** 2025-05-31 16:33:05.903837 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:05.903845 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:05.903854 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:05.903862 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:33:05.903871 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:33:05.903899 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:33:05.903914 | orchestrator | 2025-05-31 16:33:05.903930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:33:05.903950 | orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.664) 0:00:00.916 ********** 2025-05-31 16:33:05.903959 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-31 16:33:05.903967 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-31 16:33:05.903976 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-31 16:33:05.903985 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-31 16:33:05.903993 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-31 16:33:05.904001 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-31 16:33:05.904010 | orchestrator | 2025-05-31 16:33:05.904018 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-31 16:33:05.904027 | orchestrator | 2025-05-31 16:33:05.904035 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-31 16:33:05.904044 | orchestrator | Saturday 31 May 2025 16:28:35 +0000 (0:00:00.925) 0:00:01.842 ********** 2025-05-31 16:33:05.904053 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:33:05.904062 | orchestrator | 2025-05-31 
16:33:05.904070 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-31 16:33:05.904079 | orchestrator | Saturday 31 May 2025 16:28:36 +0000 (0:00:00.995) 0:00:02.837 ********** 2025-05-31 16:33:05.904087 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:05.904096 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:05.904104 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:05.904113 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:33:05.904121 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:33:05.904130 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:33:05.904138 | orchestrator | 2025-05-31 16:33:05.904146 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-31 16:33:05.904155 | orchestrator | Saturday 31 May 2025 16:28:37 +0000 (0:00:01.300) 0:00:04.137 ********** 2025-05-31 16:33:05.904171 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:05.904179 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:05.904189 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:05.904200 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:33:05.904210 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:33:05.904235 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:33:05.904246 | orchestrator | 2025-05-31 16:33:05.904257 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-31 16:33:05.904268 | orchestrator | Saturday 31 May 2025 16:28:38 +0000 (0:00:01.111) 0:00:05.249 ********** 2025-05-31 16:33:05.904279 | orchestrator | ok: [testbed-node-0] => { 2025-05-31 16:33:05.904290 | orchestrator |  "changed": false, 2025-05-31 16:33:05.904300 | orchestrator |  "msg": "All assertions passed" 2025-05-31 16:33:05.904311 | orchestrator | } 2025-05-31 16:33:05.904322 | orchestrator | ok: [testbed-node-1] => { 2025-05-31 16:33:05.904332 | orchestrator |  "changed": false, 2025-05-31 16:33:05.904343 | orchestrator |  "msg": "All assertions passed" 2025-05-31 16:33:05.904353 | orchestrator | } 2025-05-31 16:33:05.904364 | orchestrator | ok: [testbed-node-2] => { 2025-05-31 16:33:05.904374 | orchestrator |  "changed": false, 2025-05-31 16:33:05.904385 | orchestrator |  "msg": "All assertions passed" 2025-05-31 16:33:05.904395 | orchestrator | } 2025-05-31 16:33:05.904406 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 16:33:05.904416 | orchestrator |  "changed": false, 2025-05-31 16:33:05.904426 | orchestrator |  "msg": "All assertions passed" 2025-05-31 16:33:05.904437 | orchestrator | } 2025-05-31 16:33:05.904447 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 16:33:05.904458 | orchestrator |  "changed": false, 2025-05-31 16:33:05.904468 | orchestrator |  "msg": "All assertions passed" 2025-05-31 16:33:05.904479 | orchestrator | } 2025-05-31 16:33:05.904489 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 16:33:05.904500 | orchestrator |  "changed": false, 2025-05-31 16:33:05.904510 | orchestrator |  "msg": "All assertions passed" 2025-05-31 16:33:05.904521 | orchestrator | } 2025-05-31 16:33:05.904531 | orchestrator | 2025-05-31 16:33:05.904542 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-31 16:33:05.904552 | orchestrator | Saturday 31 May 2025 16:28:39 +0000 (0:00:00.536) 0:00:05.786 ********** 2025-05-31 16:33:05.904563 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.904573 | orchestrator | skipping: [testbed-node-1] 2025-05-31 
16:33:05.904583 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.904594 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.904604 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.904615 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.904625 | orchestrator | 2025-05-31 16:33:05.904636 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-31 16:33:05.904647 | orchestrator | Saturday 31 May 2025 16:28:39 +0000 (0:00:00.611) 0:00:06.397 ********** 2025-05-31 16:33:05.904657 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-31 16:33:05.904668 | orchestrator | 2025-05-31 16:33:05.904678 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-31 16:33:05.904689 | orchestrator | Saturday 31 May 2025 16:28:43 +0000 (0:00:03.826) 0:00:10.224 ********** 2025-05-31 16:33:05.904699 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-31 16:33:05.904711 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-31 16:33:05.904721 | orchestrator | 2025-05-31 16:33:05.904732 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-31 16:33:05.904742 | orchestrator | Saturday 31 May 2025 16:28:50 +0000 (0:00:06.955) 0:00:17.179 ********** 2025-05-31 16:33:05.904753 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:33:05.904764 | orchestrator | 2025-05-31 16:33:05.904774 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-31 16:33:05.904791 | orchestrator | Saturday 31 May 2025 16:28:54 +0000 (0:00:03.534) 0:00:20.713 ********** 2025-05-31 16:33:05.904801 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:33:05.904811 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-31 16:33:05.904822 | orchestrator | 2025-05-31 16:33:05.904833 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-31 16:33:05.904848 | orchestrator | Saturday 31 May 2025 16:28:58 +0000 (0:00:04.251) 0:00:24.965 ********** 2025-05-31 16:33:05.904859 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:33:05.904870 | orchestrator | 2025-05-31 16:33:05.904905 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-31 16:33:05.904917 | orchestrator | Saturday 31 May 2025 16:29:01 +0000 (0:00:03.221) 0:00:28.186 ********** 2025-05-31 16:33:05.904927 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-31 16:33:05.904938 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-31 16:33:05.904948 | orchestrator | 2025-05-31 16:33:05.904959 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-31 16:33:05.904969 | orchestrator | Saturday 31 May 2025 16:29:10 +0000 (0:00:08.635) 0:00:36.822 ********** 2025-05-31 16:33:05.904980 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.904990 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.905001 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.905011 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.905022 | 
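Annotation: the service-ks-register tasks above register neutron in Keystone: a "network" service, internal and public endpoints on port 9696, a neutron user in the service project, and grants of the admin and service roles. A hedged sketch of roughly the same result using openstacksdk follows; the cloud name and password are placeholders, and kolla-ansible actually performs these steps through its own Ansible modules rather than code like this.

import openstack

def register_neutron(cloud_name="testbed", password="CHANGE_ME"):
    """Illustrative Keystone registration mirroring the service-ks-register tasks in the log."""
    conn = openstack.connect(cloud=cloud_name)

    service = conn.identity.create_service(name="neutron", type="network")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9696"),
        ("public", "https://api.testbed.osism.xyz:9696"),
    ]:
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

    project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
    user = conn.identity.create_user(name="neutron", password=password,
                                     default_project_id=project.id)
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name) or conn.identity.create_role(name=role_name)
        conn.identity.assign_project_role_to_user(project, user, role)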
orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.905032 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.905043 | orchestrator | 2025-05-31 16:33:05.905054 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-31 16:33:05.905064 | orchestrator | Saturday 31 May 2025 16:29:10 +0000 (0:00:00.752) 0:00:37.575 ********** 2025-05-31 16:33:05.905075 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.905085 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.905096 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.905106 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.905116 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.905127 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.905137 | orchestrator | 2025-05-31 16:33:05.905148 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-31 16:33:05.905158 | orchestrator | Saturday 31 May 2025 16:29:13 +0000 (0:00:02.995) 0:00:40.570 ********** 2025-05-31 16:33:05.905169 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:05.905179 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:05.905190 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:05.905201 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:33:05.905211 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:33:05.905229 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:33:05.905240 | orchestrator | 2025-05-31 16:33:05.905250 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-31 16:33:05.905261 | orchestrator | Saturday 31 May 2025 16:29:15 +0000 (0:00:01.424) 0:00:41.994 ********** 2025-05-31 16:33:05.905272 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.905282 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.905293 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.905304 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.905314 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.905324 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.905335 | orchestrator | 2025-05-31 16:33:05.905346 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-31 16:33:05.905356 | orchestrator | Saturday 31 May 2025 16:29:18 +0000 (0:00:03.139) 0:00:45.134 ********** 2025-05-31 16:33:05.905370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.905391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.905453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.905483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.905499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.905529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.905558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.905570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.905600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.905617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.905648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.905704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.905734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.905745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.905777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.905839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.905851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.905862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.906480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.906518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.906529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.906540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.906967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.906988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.907148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.907164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.907190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.907349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.907406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.907495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.907506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.907516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.907566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.907578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.908081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.908134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.908201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.908213 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.908239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.908250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.908303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.908353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.908384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.908447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.909145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.909177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.909208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.909232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.909254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.909268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.909319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.909341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.909350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.909381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.909396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 
16:33:05.909407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.909428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.909468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.909479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.909527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.909536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.909546 | orchestrator | 2025-05-31 16:33:05.909590 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-31 16:33:05.909600 | orchestrator | Saturday 31 May 2025 16:29:21 +0000 (0:00:03.057) 0:00:48.192 ********** 2025-05-31 16:33:05.909610 | orchestrator | [WARNING]: Skipped 2025-05-31 16:33:05.909625 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-31 16:33:05.909635 | orchestrator | due to this access issue: 2025-05-31 16:33:05.909645 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-31 16:33:05.909655 | orchestrator | a directory 2025-05-31 16:33:05.909664 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:33:05.909674 | orchestrator | 2025-05-31 16:33:05.909683 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-31 16:33:05.909693 | orchestrator | Saturday 31 May 2025 16:29:22 +0000 (0:00:00.599) 0:00:48.791 ********** 2025-05-31 16:33:05.909702 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:33:05.909712 | orchestrator | 2025-05-31 
16:33:05.909722 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-31 16:33:05.909735 | orchestrator | Saturday 31 May 2025 16:29:23 +0000 (0:00:01.280) 0:00:50.072 ********** 2025-05-31 16:33:05.909746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.909764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.909777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.909788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.909823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.909835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.909846 | orchestrator | 2025-05-31 16:33:05.909857 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-31 16:33:05.909868 | orchestrator | Saturday 31 May 2025 16:29:29 +0000 (0:00:05.761) 0:00:55.833 ********** 2025-05-31 16:33:05.909921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.909934 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.909946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.909962 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.909974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.909984 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.910000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.910011 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.910067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910078 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.910096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910106 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.910116 | orchestrator | 2025-05-31 16:33:05.910126 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-31 16:33:05.910143 | orchestrator | Saturday 31 May 2025 16:29:33 +0000 (0:00:03.893) 0:00:59.727 ********** 2025-05-31 16:33:05.910153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910163 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.910196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.910207 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.910217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.910227 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.910243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910253 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.910264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.910290 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.910301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910311 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.910320 | orchestrator | 2025-05-31 16:33:05.910330 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-31 16:33:05.910340 | orchestrator | Saturday 31 May 2025 16:29:37 +0000 (0:00:04.463) 0:01:04.190 ********** 2025-05-31 16:33:05.910349 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.910359 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.910368 | orchestrator | skipping: [testbed-node-3] 2025-05-31 
16:33:05.910383 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.910399 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.910415 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.910429 | orchestrator | 2025-05-31 16:33:05.910438 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-31 16:33:05.910448 | orchestrator | Saturday 31 May 2025 16:29:41 +0000 (0:00:04.060) 0:01:08.251 ********** 2025-05-31 16:33:05.910462 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.910472 | orchestrator | 2025-05-31 16:33:05.910481 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-31 16:33:05.910491 | orchestrator | Saturday 31 May 2025 16:29:41 +0000 (0:00:00.136) 0:01:08.387 ********** 2025-05-31 16:33:05.910500 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.910510 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.910519 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.910528 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.910538 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.910547 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.910556 | orchestrator | 2025-05-31 16:33:05.910566 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-31 16:33:05.910575 | orchestrator | Saturday 31 May 2025 16:29:42 +0000 (0:00:01.014) 0:01:09.402 ********** 2025-05-31 16:33:05.910586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.910621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.910679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.910712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.910722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.910793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.910815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.910837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.910851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910861 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.910871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.910915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.910953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.910994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.911046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.911151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.911217 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.911251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911268 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.911286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.911356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.911398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911418 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.911432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.911447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.911501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.911630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.911680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911701 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.911711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.911730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.911775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.911815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.911930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.911948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.911970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.911987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.912001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.912027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.912037 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.912047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.912073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.912087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.912409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.912429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.912445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.912455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912499 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.912509 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:33:05.912519 | orchestrator |
2025-05-31 16:33:05.912529 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-05-31 16:33:05.912539 | orchestrator | Saturday 31 May 2025 16:29:46 +0000 (0:00:04.135) 0:01:13.537 **********
2025-05-31 16:33:05.912549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.912566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.912632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.912662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.912676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.912716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.912763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.912775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.912810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.912842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.912869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.912921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.912991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.913018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.913058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.913095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.913107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.913123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.913558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.913589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.913721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.913742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.913752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.913804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.913851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.913922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914302 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.914335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.914381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.914413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.914450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.914469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.914510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.914518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.914543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.914665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.915033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.915066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.915219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.915396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.915414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.915422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.915431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.915674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.915717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.915802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.915816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.915825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.915833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.915846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.915862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.916040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.916059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.916067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.916120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.916211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.916233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.916301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.916622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.916711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.916742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.916768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.916775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.916839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.916846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.916855 | orchestrator | 2025-05-31 16:33:05.916999 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-31 16:33:05.917028 | orchestrator | Saturday 31 May 2025 16:29:50 +0000 (0:00:03.993) 0:01:17.531 ********** 2025-05-31 16:33:05.917047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.917320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.917435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.917455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.917462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.917539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.917683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.917713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.917734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.917751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.917764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.918183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.918202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.918325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.918365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.918487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.918732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.918740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.918758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.918849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.918862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.918869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.919002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.919053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.919213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.919307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.919329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.919335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.919352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.919426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.919442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.919456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.919465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.919807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.919839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.919846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.919865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.919945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.919963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.919979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.920028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.921328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.921445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.921469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.921496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.921509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.921539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.921561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.921577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-31 16:33:05.921601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-31 16:33:05.921609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-31 16:33:05.921619 | orchestrator |
2025-05-31 16:33:05.921626 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-05-31 16:33:05.921633 | orchestrator | Saturday 31 May 2025 16:29:58 +0000 (0:00:07.493) 0:01:25.024 **********
2025-05-31 16:33:05.921647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-31 16:33:05.921658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.921690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.921745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.921768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921786 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.921795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.921805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.921837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.921910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.921916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.921935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.921943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.921957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921971 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.921983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.921998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.922011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.922102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.922128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.922156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.922192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.922212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.922278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.922335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.922386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.922420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922427 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.922449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922468 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.922475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.922482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.922517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.922571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-31 16:33:05.922594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-31 16:33:05.922603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-31 16:33:05.922626 | orchestrator |
2025-05-31 16:33:05.922633 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-05-31 16:33:05.922639 | orchestrator | Saturday 31 May 2025 16:30:01 +0000 (0:00:03.104) 0:01:28.129 **********
2025-05-31 16:33:05.922645 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:33:05.922651 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:33:05.922657 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:33:05.922663 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:33:05.922669 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:05.922675 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:33:05.922681 | orchestrator |
2025-05-31 16:33:05.922703 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-05-31 16:33:05.922710 | orchestrator | Saturday 31 May 2025 16:30:06 +0000 (0:00:04.612) 0:01:32.741 **********
2025-05-31 16:33:05.922717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.922723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.922772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.922827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.922849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.922862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922868 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.922875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.922933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.922971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.922991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.922999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.923052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923058 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.923075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923091 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.923137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.923146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.923216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923289 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.923323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.923350 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923363 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.923373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.923380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.923409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.923479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.923506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.923533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.923565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923622 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.923646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.923720 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.923746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.923810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.923899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.923906 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.923927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.923934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.923940 | orchestrator | 2025-05-31 16:33:05.923946 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-31 16:33:05.923952 | orchestrator | Saturday 31 May 2025 16:30:10 +0000 (0:00:04.500) 0:01:37.242 ********** 2025-05-31 16:33:05.923959 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.923965 | orchestrator | skipping: [testbed-node-1] 
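Editor's note on the per-item output above: the neutron role in kolla-ansible loops over its service map, and each item pairs a service name with a definition dict carrying the container image, enable flag, host/group mapping, volumes and optional healthcheck/haproxy sections. Whether an item is reported as "changed" or "skipping" on a given node depends on that dict plus additional per-task conditions (for example, whether the service actually consumes the file being templated), which is why only neutron-server reports "changed" here. The following is a minimal sketch, not the kolla-ansible source, of the baseline enabled/host filter; all names and values are trimmed, hypothetical excerpts of the dicts visible in the log items:

# Minimal sketch (assumption: mirrors only the baseline enabled/host filter,
# not the full per-task conditions applied by the real kolla-ansible role).

neutron_services = {
    # Trimmed, hypothetical excerpt of the service map seen in the loop items.
    "neutron-server": {
        "container_name": "neutron_server",
        "enabled": True,
        "host_in_groups": True,
    },
    "neutron-openvswitch-agent": {
        "container_name": "neutron_openvswitch_agent",
        "enabled": False,
        "host_in_groups": True,
    },
    "neutron-tls-proxy": {
        "container_name": "neutron_tls_proxy",
        "enabled": "no",  # kolla-style booleans sometimes arrive as "yes"/"no" strings
        "host_in_groups": True,
    },
}


def is_candidate(service: dict) -> bool:
    """Baseline filter: the service is enabled and this host belongs to its group."""
    enabled = str(service.get("enabled", False)).lower() in ("true", "yes", "1")
    return enabled and bool(service.get("host_in_groups", False))


for name, service in neutron_services.items():
    state = "handled" if is_candidate(service) else "skipped"
    print(f"{name} ({service['container_name']}): {state}")

On a controller such as testbed-node-0 this baseline filter already accounts for most of the skips; the remaining ones (for instance neutron-ovn-metadata-agent being skipped although it is enabled and in the host's groups) come from the extra per-task conditions mentioned above.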
2025-05-31 16:33:05.923992 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.923999 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924005 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924011 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924017 | orchestrator | 2025-05-31 16:33:05.924023 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-31 16:33:05.924029 | orchestrator | Saturday 31 May 2025 16:30:12 +0000 (0:00:01.952) 0:01:39.194 ********** 2025-05-31 16:33:05.924035 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.924041 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924047 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.924053 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.924062 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924068 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924074 | orchestrator | 2025-05-31 16:33:05.924080 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-31 16:33:05.924086 | orchestrator | Saturday 31 May 2025 16:30:14 +0000 (0:00:02.170) 0:01:41.365 ********** 2025-05-31 16:33:05.924092 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.924098 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.924104 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.924110 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924116 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924122 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924128 | orchestrator | 2025-05-31 16:33:05.924134 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-31 16:33:05.924140 | orchestrator | Saturday 31 May 2025 16:30:17 +0000 (0:00:02.441) 0:01:43.806 ********** 2025-05-31 16:33:05.924146 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.924152 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924158 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.924164 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.924170 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924176 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924185 | orchestrator | 2025-05-31 16:33:05.924191 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-31 16:33:05.924197 | orchestrator | Saturday 31 May 2025 16:30:19 +0000 (0:00:02.165) 0:01:45.972 ********** 2025-05-31 16:33:05.924203 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.924209 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924215 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.924221 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.924227 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924233 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924239 | orchestrator | 2025-05-31 16:33:05.924245 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-31 16:33:05.924251 | orchestrator | Saturday 31 May 2025 16:30:21 +0000 (0:00:02.174) 0:01:48.147 ********** 2025-05-31 16:33:05.924257 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.924263 | orchestrator | skipping: 
[testbed-node-1] 2025-05-31 16:33:05.924269 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924275 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.924284 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924290 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924296 | orchestrator | 2025-05-31 16:33:05.924302 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-31 16:33:05.924308 | orchestrator | Saturday 31 May 2025 16:30:23 +0000 (0:00:01.796) 0:01:49.944 ********** 2025-05-31 16:33:05.924314 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-31 16:33:05.924320 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.924327 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-31 16:33:05.924333 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.924339 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-31 16:33:05.924345 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924351 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-31 16:33:05.924357 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.924363 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-31 16:33:05.924369 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.924375 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-31 16:33:05.924381 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.924387 | orchestrator | 2025-05-31 16:33:05.924393 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-31 16:33:05.924399 | orchestrator | Saturday 31 May 2025 16:30:25 +0000 (0:00:02.516) 0:01:52.460 ********** 2025-05-31 16:33:05.924405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.924415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.924456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.924497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.924526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.924554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.924589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924597 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.924604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.924610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:05.924658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.924672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924705 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.924716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.924730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.924753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924763 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.924814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.924821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.924839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924861 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.924868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.924895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.924912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.924922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.924952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.924963 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.924969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.924997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925003 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.925010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.925020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 
16:33:05.925053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.925076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.925104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.925111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925154 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.925161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.925167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-31 16:33:05.925177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.925226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.925249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925265 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.925271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.925285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.925313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 
16:33:05.925345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.925367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.925394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925409 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.925415 | orchestrator | 2025-05-31 16:33:05.925421 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-31 16:33:05.925428 | orchestrator | Saturday 31 May 2025 16:30:28 +0000 (0:00:02.257) 0:01:54.717 ********** 2025-05-31 16:33:05.925434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.925535 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.925570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.925667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.925692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925704 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.925714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.925724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.925756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.925815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.925838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925851 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.925859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-31 16:33:05.925870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.925910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.925951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.925968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.925979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.925985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.925996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926009 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.926048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.926077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.926136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.926163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926178 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-05-31 16:33:05.926199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.926237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.926342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.926369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926387 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.926417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.926468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.926534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.926540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.926556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.926563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.926569 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926575 | orchestrator | 2025-05-31 16:33:05.926582 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-31 16:33:05.926594 | orchestrator | Saturday 31 May 2025 16:30:30 +0000 (0:00:01.979) 0:01:56.697 ********** 2025-05-31 16:33:05.926601 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926607 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926613 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926619 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926625 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926631 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926638 | orchestrator | 2025-05-31 16:33:05.926644 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-31 16:33:05.926650 | orchestrator 
| Saturday 31 May 2025 16:30:31 +0000 (0:00:01.904) 0:01:58.601 ********** 2025-05-31 16:33:05.926656 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926662 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926668 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926674 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:33:05.926680 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:33:05.926686 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:33:05.926691 | orchestrator | 2025-05-31 16:33:05.926698 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-31 16:33:05.926704 | orchestrator | Saturday 31 May 2025 16:30:37 +0000 (0:00:05.956) 0:02:04.558 ********** 2025-05-31 16:33:05.926710 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926716 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926722 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926727 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926733 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926739 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926745 | orchestrator | 2025-05-31 16:33:05.926751 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-31 16:33:05.926757 | orchestrator | Saturday 31 May 2025 16:30:39 +0000 (0:00:01.823) 0:02:06.381 ********** 2025-05-31 16:33:05.926763 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926769 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926775 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926781 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926787 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926793 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926799 | orchestrator | 2025-05-31 16:33:05.926805 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-31 16:33:05.926811 | orchestrator | Saturday 31 May 2025 16:30:41 +0000 (0:00:02.130) 0:02:08.512 ********** 2025-05-31 16:33:05.926817 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926823 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926829 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926835 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926841 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926847 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926853 | orchestrator | 2025-05-31 16:33:05.926859 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-31 16:33:05.926865 | orchestrator | Saturday 31 May 2025 16:30:45 +0000 (0:00:03.266) 0:02:11.778 ********** 2025-05-31 16:33:05.926871 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926887 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926893 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926899 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926905 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926911 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926917 | orchestrator | 2025-05-31 16:33:05.926923 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-31 16:33:05.926929 | 
orchestrator | Saturday 31 May 2025 16:30:47 +0000 (0:00:02.718) 0:02:14.496 ********** 2025-05-31 16:33:05.926939 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.926947 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.926954 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.926960 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.926966 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.926972 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.926978 | orchestrator | 2025-05-31 16:33:05.926984 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-31 16:33:05.926990 | orchestrator | Saturday 31 May 2025 16:30:49 +0000 (0:00:01.777) 0:02:16.274 ********** 2025-05-31 16:33:05.926996 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.927002 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.927008 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.927014 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.927020 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.927025 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.927031 | orchestrator | 2025-05-31 16:33:05.927037 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-31 16:33:05.927043 | orchestrator | Saturday 31 May 2025 16:30:53 +0000 (0:00:04.247) 0:02:20.522 ********** 2025-05-31 16:33:05.927049 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.927055 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.927061 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.927067 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.927073 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.927079 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.927085 | orchestrator | 2025-05-31 16:33:05.927091 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-31 16:33:05.927097 | orchestrator | Saturday 31 May 2025 16:30:55 +0000 (0:00:01.929) 0:02:22.451 ********** 2025-05-31 16:33:05.927103 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.927109 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.927115 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.927121 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.927127 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.927133 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.927139 | orchestrator | 2025-05-31 16:33:05.927145 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-31 16:33:05.927151 | orchestrator | Saturday 31 May 2025 16:30:57 +0000 (0:00:01.701) 0:02:24.153 ********** 2025-05-31 16:33:05.927157 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-31 16:33:05.927167 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.927173 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-31 16:33:05.927179 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.927185 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-31 16:33:05.927191 | orchestrator | skipping: 
[testbed-node-2] 2025-05-31 16:33:05.927198 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-31 16:33:05.927204 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.927210 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-31 16:33:05.927216 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.927222 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-31 16:33:05.927228 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:33:05.927234 | orchestrator | 2025-05-31 16:33:05.927240 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-31 16:33:05.927246 | orchestrator | Saturday 31 May 2025 16:30:59 +0000 (0:00:01.927) 0:02:26.080 ********** 2025-05-31 16:33:05.927252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.927265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927278 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.927299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.927355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.927386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927404 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:05.927414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.927420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.927458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.927513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.927535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927558 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:05.927570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.927576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.927612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.927667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.927689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927712 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:05.927723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.927730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.927838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.927953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.927961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.927975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.927987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.927997 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:33:05.928004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.928014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.928046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.928109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.928131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928151 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:33:05.928170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.928180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.928212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.928267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.928290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-31 16:33:05.928297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-31 16:33:05.928306 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:33:05.928313 | orchestrator |
2025-05-31 16:33:05.928319 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-05-31 16:33:05.928325 | orchestrator | Saturday 31 May 2025 16:31:01 +0000 (0:00:02.406) 0:02:28.486 **********
2025-05-31 16:33:05.928334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-31 16:33:05.928341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-31 16:33:05.928351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.928378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.928415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 16:33:05.928427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.928470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.928525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.928576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.928593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.928619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.928647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.928687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.928748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.928804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.928835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.928851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.928874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-31 16:33:05.928913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.928930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.928936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.928952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.928962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.928979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.928994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.929014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.929027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 
16:33:05.929036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.929049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.929068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.929075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 16:33:05.929093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.929128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.929141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 16:33:05.929147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929159 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.929166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.929179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 16:33:05.929210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:33:05.929240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:33:05.929246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 16:33:05.929253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 16:33:05.929262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-31 16:33:05.929269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
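The "Check neutron containers" loop above iterates over the kolla-ansible neutron service map; each item carries the container name, image, volumes, an optional healthcheck, and the two flags that decide whether anything happens on a given host: 'enabled' and 'host_in_groups'. In the output, only items where both flags are truthy report "changed" (neutron-server on nodes 0-2, neutron-ovn-metadata-agent on nodes 3-5); everything else is skipped. A minimal Python sketch of that decision, for reading the log only (would_deploy and to_bool are made-up helper names, not kolla-ansible code):

def to_bool(value) -> bool:
    # The log mixes real booleans with strings such as 'no' (neutron-tls-proxy),
    # so normalise strings roughly the way Ansible's | bool filter would.
    if isinstance(value, str):
        return value.strip().lower() in ("yes", "true", "1", "on")
    return bool(value)

def would_deploy(service: dict) -> bool:
    # Mirrors the pattern visible above: act only when the service is enabled
    # and the current host is in the service's group.
    return to_bool(service.get("enabled")) and to_bool(service.get("host_in_groups"))

# Flag values copied from the log output above.
print(would_deploy({"container_name": "neutron_ovn_metadata_agent",
                    "enabled": True, "host_in_groups": True}))    # True  -> "changed"
print(would_deploy({"container_name": "neutron_dhcp_agent",
                    "enabled": False, "host_in_groups": False}))  # False -> "skipping"
print(would_deploy({"container_name": "neutron_tls_proxy",
                    "enabled": "no", "host_in_groups": True}))    # False -> "skipping"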
2025-05-31 16:33:05.929279 | orchestrator |
2025-05-31 16:33:05.929285 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-31 16:33:05.929291 | orchestrator | Saturday 31 May 2025 16:31:05 +0000 (0:00:03.417) 0:02:31.904 **********
2025-05-31 16:33:05.929297 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:33:05.929304 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:33:05.929310 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:33:05.929316 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:33:05.929322 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:33:05.929328 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:33:05.929334 | orchestrator |
2025-05-31 16:33:05.929340 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-05-31 16:33:05.929346 | orchestrator | Saturday 31 May 2025 16:31:05 +0000 (0:00:00.552) 0:02:32.457 **********
2025-05-31 16:33:05.929352 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:05.929358 | orchestrator |
2025-05-31 16:33:05.929367 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-05-31 16:33:05.929373 | orchestrator | Saturday 31 May 2025 16:31:08 +0000 (0:00:02.762) 0:02:35.220 **********
2025-05-31 16:33:05.929379 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:05.929385 | orchestrator |
2025-05-31 16:33:05.929391 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-05-31 16:33:05.929397 | orchestrator | Saturday 31 May 2025 16:31:11 +0000 (0:00:02.617) 0:02:37.838 **********
2025-05-31 16:33:05.929403 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:05.929409 | orchestrator |
2025-05-31 16:33:05.929416 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-31 16:33:05.929422 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:40.356) 0:03:18.194 **********
2025-05-31 16:33:05.929428 | orchestrator |
2025-05-31 16:33:05.929434 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-31 16:33:05.929440 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:00.062) 0:03:18.257 **********
2025-05-31 16:33:05.929446 | orchestrator |
2025-05-31 16:33:05.929452 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-31 16:33:05.929458 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:00.231) 0:03:18.488 **********
2025-05-31 16:33:05.929463 | orchestrator |
2025-05-31 16:33:05.929470 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-31 16:33:05.929475 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:00.053) 0:03:18.542 **********
2025-05-31 16:33:05.929481 | orchestrator |
2025-05-31 16:33:05.929487 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-31 16:33:05.929493 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:00.051) 0:03:18.594 **********
2025-05-31 16:33:05.929499 | orchestrator |
2025-05-31 16:33:05.929505 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-31 16:33:05.929511 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:00.053) 0:03:18.648 **********
2025-05-31 16:33:05.929517 | orchestrator |
2025-05-31 16:33:05.929523 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-31 16:33:05.929529 | orchestrator | Saturday 31 May 2025 16:31:52 +0000 (0:00:00.246) 0:03:18.895 **********
2025-05-31 16:33:05.929535 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:05.929541 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:33:05.929547 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:33:05.929554 | orchestrator |
2025-05-31 16:33:05.929563 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-31 16:33:05.929569 | orchestrator | Saturday 31 May 2025 16:32:17 +0000 (0:00:25.201) 0:03:44.096 **********
2025-05-31 16:33:05.929575 | orchestrator | changed: [testbed-node-5]
2025-05-31 16:33:05.929581 | orchestrator | changed: [testbed-node-4]
2025-05-31 16:33:05.929587 | orchestrator | changed: [testbed-node-3]
2025-05-31 16:33:05.929593 | orchestrator |
2025-05-31 16:33:05.929599 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 16:33:05.929605 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 16:33:05.929612 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-31 16:33:05.929618 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-31 16:33:05.929627 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-31 16:33:05.929633 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-31 16:33:05.929639 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-31 16:33:05.929645 | orchestrator |
2025-05-31 16:33:05.929651 | orchestrator |
2025-05-31 16:33:05.929657 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 16:33:05.929663 | orchestrator | Saturday 31 May 2025 16:33:05 +0000 (0:00:48.000) 0:04:32.097 **********
2025-05-31 16:33:05.929669 | orchestrator | ===============================================================================
2025-05-31 16:33:05.929675 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 48.00s
2025-05-31 16:33:05.929681 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.36s
2025-05-31 16:33:05.929687 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.20s
2025-05-31 16:33:05.929693 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.64s
2025-05-31 16:33:05.929699 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.49s
2025-05-31 16:33:05.929705 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.96s
2025-05-31 16:33:05.929711 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.96s
2025-05-31 16:33:05.929717 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.76s
2025-05-31 16:33:05.929723 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.61s
2025-05-31 16:33:05.929729 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.50s
2025-05-31 16:33:05.929735 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.46s
2025-05-31 16:33:05.929744 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.25s
2025-05-31 16:33:05.929751 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.25s
2025-05-31 16:33:05.929757 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.14s
2025-05-31 16:33:05.929763 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.06s
2025-05-31 16:33:05.929769 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.99s
2025-05-31 16:33:05.929775 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.89s
2025-05-31 16:33:05.929781 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.83s
2025-05-31 16:33:05.929787 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.53s
2025-05-31 16:33:05.929797 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.42s
2025-05-31 16:33:08.930332 | orchestrator | 2025-05-31 16:33:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:33:08.931585 | orchestrator | 2025-05-31 16:33:08 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED
2025-05-31 16:33:08.932261 | orchestrator | 2025-05-31 16:33:08 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED
2025-05-31 16:33:08.932959 | orchestrator | 2025-05-31 16:33:08 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED
2025-05-31 16:33:08.933538 | orchestrator | 2025-05-31 16:33:08 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED
2025-05-31 16:33:08.933655 | orchestrator | 2025-05-31 16:33:08 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:33:11.980137 | orchestrator | 2025-05-31 16:33:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:33:11.981175 | orchestrator | 2025-05-31 16:33:11 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED
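[Editor's note] The "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines around this point come from the deployment wrapper on the orchestrator, which launches several service deployments as background tasks and polls their state once per second until each one leaves the STARTED state. A minimal Python sketch of that poll-and-wait pattern, assuming a generic get_state(task_id) lookup; names are illustrative and not the actual osism implementation:

    # Sketch only (assumption): get_state() stands in for whatever backend
    # reports a task's state (STARTED, SUCCESS, FAILURE, ...).
    import time

    def wait_for_tasks(task_ids, get_state, interval=1):
        """Poll background task states until none of them is still running."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)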
2025-05-31 16:33:11.982462 | orchestrator | 2025-05-31 16:33:11 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:11.983567 | orchestrator | 2025-05-31 16:33:11 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:11.984544 | orchestrator | 2025-05-31 16:33:11 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED 2025-05-31 16:33:11.984773 | orchestrator | 2025-05-31 16:33:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:15.038471 | orchestrator | 2025-05-31 16:33:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:15.041541 | orchestrator | 2025-05-31 16:33:15 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:15.044067 | orchestrator | 2025-05-31 16:33:15 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:15.047993 | orchestrator | 2025-05-31 16:33:15 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:15.052245 | orchestrator | 2025-05-31 16:33:15 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED 2025-05-31 16:33:15.052272 | orchestrator | 2025-05-31 16:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:18.090706 | orchestrator | 2025-05-31 16:33:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:18.093345 | orchestrator | 2025-05-31 16:33:18 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:18.098442 | orchestrator | 2025-05-31 16:33:18 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:18.101301 | orchestrator | 2025-05-31 16:33:18 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:18.107571 | orchestrator | 2025-05-31 16:33:18 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED 2025-05-31 16:33:18.107601 | orchestrator | 2025-05-31 16:33:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:21.143695 | orchestrator | 2025-05-31 16:33:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:21.144011 | orchestrator | 2025-05-31 16:33:21 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:21.144474 | orchestrator | 2025-05-31 16:33:21 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:21.145000 | orchestrator | 2025-05-31 16:33:21 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:21.145976 | orchestrator | 2025-05-31 16:33:21 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED 2025-05-31 16:33:21.146003 | orchestrator | 2025-05-31 16:33:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:24.184288 | orchestrator | 2025-05-31 16:33:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:24.184372 | orchestrator | 2025-05-31 16:33:24 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:24.184712 | orchestrator | 2025-05-31 16:33:24 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state STARTED 2025-05-31 16:33:24.185295 | orchestrator | 2025-05-31 16:33:24 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:24.185884 | orchestrator | 2025-05-31 16:33:24 | INFO  | Task 
1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED 2025-05-31 16:33:24.185945 | orchestrator | 2025-05-31 16:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:27.242521 | orchestrator | 2025-05-31 16:33:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:27.243879 | orchestrator | 2025-05-31 16:33:27 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:27.244423 | orchestrator | 2025-05-31 16:33:27 | INFO  | Task cb3ea6d6-2fb0-4f81-bfef-9f0e8321bafd is in state SUCCESS 2025-05-31 16:33:27.245866 | orchestrator | 2025-05-31 16:33:27.245939 | orchestrator | 2025-05-31 16:33:27.245954 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:33:27.245966 | orchestrator | 2025-05-31 16:33:27.245977 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:33:27.245988 | orchestrator | Saturday 31 May 2025 16:31:36 +0000 (0:00:00.289) 0:00:00.289 ********** 2025-05-31 16:33:27.246000 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:33:27.246011 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:33:27.246102 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:33:27.246114 | orchestrator | 2025-05-31 16:33:27.246303 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:33:27.246323 | orchestrator | Saturday 31 May 2025 16:31:36 +0000 (0:00:00.372) 0:00:00.661 ********** 2025-05-31 16:33:27.246334 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-31 16:33:27.246346 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-31 16:33:27.246356 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-31 16:33:27.246367 | orchestrator | 2025-05-31 16:33:27.246378 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-31 16:33:27.246389 | orchestrator | 2025-05-31 16:33:27.246399 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-31 16:33:27.246410 | orchestrator | Saturday 31 May 2025 16:31:37 +0000 (0:00:00.291) 0:00:00.953 ********** 2025-05-31 16:33:27.246421 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:33:27.246432 | orchestrator | 2025-05-31 16:33:27.246443 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-31 16:33:27.246453 | orchestrator | Saturday 31 May 2025 16:31:37 +0000 (0:00:00.683) 0:00:01.637 ********** 2025-05-31 16:33:27.246465 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-31 16:33:27.246477 | orchestrator | 2025-05-31 16:33:27.246502 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-31 16:33:27.246515 | orchestrator | Saturday 31 May 2025 16:31:41 +0000 (0:00:03.487) 0:00:05.124 ********** 2025-05-31 16:33:27.246527 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-31 16:33:27.246558 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-31 16:33:27.246571 | orchestrator | 2025-05-31 16:33:27.246582 | orchestrator | TASK [service-ks-register : magnum | Creating projects] 
************************ 2025-05-31 16:33:27.246594 | orchestrator | Saturday 31 May 2025 16:31:48 +0000 (0:00:07.009) 0:00:12.133 ********** 2025-05-31 16:33:27.246606 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:33:27.246619 | orchestrator | 2025-05-31 16:33:27.246630 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-31 16:33:27.246643 | orchestrator | Saturday 31 May 2025 16:31:51 +0000 (0:00:03.663) 0:00:15.797 ********** 2025-05-31 16:33:27.246655 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:33:27.246667 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-31 16:33:27.246679 | orchestrator | 2025-05-31 16:33:27.246691 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-31 16:33:27.246703 | orchestrator | Saturday 31 May 2025 16:31:55 +0000 (0:00:04.109) 0:00:19.906 ********** 2025-05-31 16:33:27.246715 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:33:27.246727 | orchestrator | 2025-05-31 16:33:27.246738 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-31 16:33:27.246749 | orchestrator | Saturday 31 May 2025 16:31:59 +0000 (0:00:03.237) 0:00:23.144 ********** 2025-05-31 16:33:27.246760 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-31 16:33:27.246771 | orchestrator | 2025-05-31 16:33:27.246781 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-31 16:33:27.246792 | orchestrator | Saturday 31 May 2025 16:32:03 +0000 (0:00:04.296) 0:00:27.441 ********** 2025-05-31 16:33:27.246803 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:33:27.246813 | orchestrator | 2025-05-31 16:33:27.246824 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-31 16:33:27.246834 | orchestrator | Saturday 31 May 2025 16:32:07 +0000 (0:00:03.562) 0:00:31.003 ********** 2025-05-31 16:33:27.246845 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:33:27.246856 | orchestrator | 2025-05-31 16:33:27.246866 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-31 16:33:27.246877 | orchestrator | Saturday 31 May 2025 16:32:11 +0000 (0:00:04.370) 0:00:35.374 ********** 2025-05-31 16:33:27.246910 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:33:27.246923 | orchestrator | 2025-05-31 16:33:27.246934 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-31 16:33:27.246944 | orchestrator | Saturday 31 May 2025 16:32:15 +0000 (0:00:03.906) 0:00:39.280 ********** 2025-05-31 16:33:27.246973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.246988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.247012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.247024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.247036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.247054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.247066 | orchestrator | 2025-05-31 16:33:27.247077 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-31 16:33:27.247088 | orchestrator | Saturday 31 May 2025 16:32:17 +0000 (0:00:02.162) 0:00:41.443 ********** 2025-05-31 16:33:27.247105 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.247116 | orchestrator | 2025-05-31 16:33:27.247127 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-31 16:33:27.247137 | orchestrator | Saturday 31 May 2025 16:32:17 +0000 (0:00:00.098) 0:00:41.541 ********** 2025-05-31 16:33:27.247148 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.247159 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.247172 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.247191 | orchestrator | 2025-05-31 16:33:27.247210 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-31 16:33:27.247228 | orchestrator | Saturday 31 May 2025 16:32:17 +0000 (0:00:00.307) 0:00:41.848 ********** 2025-05-31 16:33:27.247240 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:33:27.247250 | orchestrator | 2025-05-31 16:33:27.247261 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-31 16:33:27.247272 | orchestrator | Saturday 31 May 2025 16:32:18 +0000 (0:00:00.669) 0:00:42.517 ********** 2025-05-31 16:33:27.247288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247312 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.247324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247363 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.247374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247402 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.247413 | orchestrator | 2025-05-31 16:33:27.247423 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-31 16:33:27.247434 | orchestrator | Saturday 31 May 2025 16:32:20 +0000 (0:00:01.459) 0:00:43.977 ********** 2025-05-31 16:33:27.247445 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.247456 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.247466 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.247477 | orchestrator | 2025-05-31 16:33:27.247487 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-31 16:33:27.247498 | orchestrator | Saturday 31 May 2025 16:32:20 +0000 (0:00:00.712) 0:00:44.690 ********** 2025-05-31 16:33:27.247509 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:33:27.247520 | orchestrator | 2025-05-31 16:33:27.247530 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-31 16:33:27.247541 | orchestrator | Saturday 31 May 2025 16:32:22 +0000 (0:00:01.469) 0:00:46.159 ********** 2025-05-31 16:33:27.247552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.247575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.247592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.247604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.247616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.247626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2025-05-31 16:33:27.247643 | orchestrator | 2025-05-31 16:33:27.247654 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-31 16:33:27.247683 | orchestrator | Saturday 31 May 2025 16:32:25 +0000 (0:00:03.102) 0:00:49.262 ********** 2025-05-31 16:33:27.247702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247731 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.247742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247780 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.247792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247823 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.247833 | orchestrator | 2025-05-31 16:33:27.247844 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-31 16:33:27.247855 | orchestrator | Saturday 31 May 2025 16:32:26 +0000 (0:00:00.721) 0:00:49.983 ********** 2025-05-31 16:33:27.247871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247916 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.247929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.247970 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.247981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.247997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.248009 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.248019 | orchestrator | 2025-05-31 16:33:27.248030 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-31 16:33:27.248041 | orchestrator | Saturday 31 May 2025 16:32:26 +0000 (0:00:00.807) 0:00:50.790 ********** 2025-05-31 16:33:27.248052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.248384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.248395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.248414 | orchestrator | 2025-05-31 16:33:27.248426 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-31 16:33:27.248502 | orchestrator | Saturday 31 May 2025 16:32:29 +0000 (0:00:03.026) 0:00:53.817 ********** 2025-05-31 16:33:27.248514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.248660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.248675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.248687 | orchestrator | 2025-05-31 16:33:27.248698 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-31 16:33:27.248715 | orchestrator | Saturday 31 May 2025 16:32:39 +0000 (0:00:09.768) 0:01:03.586 ********** 2025-05-31 16:33:27.248728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.248744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.248756 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.248767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.248787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.248798 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.248816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 16:33:27.248828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:33:27.248840 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.248851 | orchestrator | 2025-05-31 16:33:27.248861 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-31 16:33:27.248872 | orchestrator | Saturday 31 May 2025 16:32:41 +0000 (0:00:01.784) 0:01:05.371 ********** 2025-05-31 16:33:27.248953 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.248987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.249000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 16:33:27.249019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}}) 2025-05-31 16:33:27.249031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.249048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:33:27.249065 | orchestrator | 2025-05-31 16:33:27.249075 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-31 16:33:27.249084 | orchestrator | Saturday 31 May 2025 16:32:44 +0000 (0:00:02.969) 0:01:08.340 ********** 2025-05-31 16:33:27.249094 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:33:27.249104 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:33:27.249113 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:33:27.249122 | orchestrator | 2025-05-31 16:33:27.249132 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-31 16:33:27.249141 | orchestrator | Saturday 31 May 2025 16:32:44 +0000 (0:00:00.251) 0:01:08.592 ********** 2025-05-31 16:33:27.249151 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:33:27.249160 | orchestrator | 2025-05-31 16:33:27.249170 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-31 16:33:27.249179 | orchestrator | Saturday 31 May 2025 16:32:47 +0000 (0:00:02.352) 0:01:10.945 ********** 2025-05-31 16:33:27.249189 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:33:27.249198 | orchestrator | 2025-05-31 16:33:27.249208 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-31 16:33:27.249217 | orchestrator | Saturday 31 May 2025 16:32:49 +0000 (0:00:02.288) 0:01:13.234 ********** 2025-05-31 16:33:27.249227 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:33:27.249236 | orchestrator | 2025-05-31 16:33:27.249246 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-31 16:33:27.249255 | orchestrator | Saturday 31 May 2025 16:33:03 +0000 (0:00:14.420) 0:01:27.654 ********** 2025-05-31 16:33:27.249265 | orchestrator | 2025-05-31 16:33:27.249274 | orchestrator | TASK [magnum : Flush handlers] 
*************************************************
2025-05-31 16:33:27.249284 | orchestrator | Saturday 31 May 2025 16:33:03 +0000 (0:00:00.225) 0:01:27.880 **********
2025-05-31 16:33:27.249294 | orchestrator |
2025-05-31 16:33:27.249305 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-31 16:33:27.249316 | orchestrator | Saturday 31 May 2025 16:33:04 +0000 (0:00:00.846) 0:01:28.727 **********
2025-05-31 16:33:27.249326 | orchestrator |
2025-05-31 16:33:27.249337 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-05-31 16:33:27.249348 | orchestrator | Saturday 31 May 2025 16:33:04 +0000 (0:00:00.133) 0:01:28.860 **********
2025-05-31 16:33:27.249358 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:27.249369 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:33:27.249380 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:33:27.249391 | orchestrator |
2025-05-31 16:33:27.249401 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-05-31 16:33:27.249412 | orchestrator | Saturday 31 May 2025 16:33:17 +0000 (0:00:12.293) 0:01:41.154 **********
2025-05-31 16:33:27.249422 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:33:27.249433 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:33:27.249443 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:33:27.249454 | orchestrator |
2025-05-31 16:33:27.249465 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 16:33:27.249482 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-31 16:33:27.249501 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-31 16:33:27.249512 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-31 16:33:27.249523 | orchestrator |
2025-05-31 16:33:27.249534 | orchestrator |
2025-05-31 16:33:27.249545 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 16:33:27.249556 | orchestrator | Saturday 31 May 2025 16:33:26 +0000 (0:00:09.297) 0:01:50.451 **********
2025-05-31 16:33:27.249566 | orchestrator | ===============================================================================
2025-05-31 16:33:27.249577 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.42s
2025-05-31 16:33:27.249587 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.29s
2025-05-31 16:33:27.249598 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.77s
2025-05-31 16:33:27.249609 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.30s
2025-05-31 16:33:27.249620 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.01s
2025-05-31 16:33:27.249631 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.37s
2025-05-31 16:33:27.249642 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.30s
2025-05-31 16:33:27.249652 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.11s
2025-05-31 16:33:27.249665 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.91s
2025-05-31 16:33:27.249675 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.66s
2025-05-31 16:33:27.249684 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.56s
2025-05-31 16:33:27.249694 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.49s
2025-05-31 16:33:27.249703 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.24s
2025-05-31 16:33:27.249713 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.10s
2025-05-31 16:33:27.249722 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.03s
2025-05-31 16:33:27.249732 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.97s
2025-05-31 16:33:27.249741 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.35s
2025-05-31 16:33:27.249751 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s
2025-05-31 16:33:27.249760 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.16s
2025-05-31 16:33:27.249769 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.78s
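The "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow come from the deploy wrapper polling the state of the queued Ansible runs until each one reaches a terminal state. A minimal sketch of such a poll loop in Python; get_task_state is a hypothetical callable standing in for whatever task backend the real tooling queries, not part of the osism CLI shown in this log:

import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll every task until it reports a terminal state (SUCCESS or FAILURE),
    # logging the same kind of status lines that appear in this console log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical helper
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)

if __name__ == "__main__":
    # Tiny demo: one task that succeeds on the third check.
    states = iter(["STARTED", "STARTED", "SUCCESS"])
    wait_for_tasks(["56ba4c1a-43fc-4736-9660-a157a437ca22"], lambda _task_id: next(states))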
2025-05-31 16:33:27.249779 | orchestrator | 2025-05-31 16:33:27 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED
2025-05-31 16:33:27.249789 | orchestrator | 2025-05-31 16:33:27 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED
2025-05-31 16:33:27.249799 | orchestrator | 2025-05-31 16:33:27 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:33:30.291427 | orchestrator | 2025-05-31 16:33:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:33:30.291501 | orchestrator | 2025-05-31 16:33:30 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED
2025-05-31 16:33:30.291938 | orchestrator | 2025-05-31 16:33:30 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED
2025-05-31 16:33:30.292583 | orchestrator | 2025-05-31 16:33:30 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED
2025-05-31 16:33:30.293119 | orchestrator | 2025-05-31 16:33:30 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED
2025-05-31 16:33:30.293242 | orchestrator | 2025-05-31 16:33:30 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:33:33.319994 | orchestrator | 2025-05-31 16:33:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:33:33.321071 | orchestrator | 2025-05-31 16:33:33 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED
2025-05-31 16:33:33.321647 | orchestrator | 2025-05-31 16:33:33 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED
2025-05-31 16:33:33.322379 | orchestrator | 2025-05-31 16:33:33 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED
2025-05-31 16:33:33.323092 | orchestrator | 2025-05-31 16:33:33 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED
2025-05-31 16:33:33.323457 | orchestrator | 2025-05-31 16:33:33 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:33:36.361265 | orchestrator | 2025-05-31 16:33:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:33:36.362968 | orchestrator |
2025-05-31 16:33:36 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:36.364620 | orchestrator | 2025-05-31 16:33:36 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:36.365986 | orchestrator | 2025-05-31 16:33:36 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state STARTED 2025-05-31 16:33:36.367299 | orchestrator | 2025-05-31 16:33:36 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:36.367638 | orchestrator | 2025-05-31 16:33:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:39.404625 | orchestrator | 2025-05-31 16:33:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:39.405736 | orchestrator | 2025-05-31 16:33:39 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:39.407061 | orchestrator | 2025-05-31 16:33:39 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:33:39.408744 | orchestrator | 2025-05-31 16:33:39 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:39.409832 | orchestrator | 2025-05-31 16:33:39 | INFO  | Task 1a9424ce-3c9e-4555-8474-ef427821be6b is in state SUCCESS 2025-05-31 16:33:39.411359 | orchestrator | 2025-05-31 16:33:39 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:39.411393 | orchestrator | 2025-05-31 16:33:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:42.451708 | orchestrator | 2025-05-31 16:33:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:42.452334 | orchestrator | 2025-05-31 16:33:42 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:42.452790 | orchestrator | 2025-05-31 16:33:42 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:33:42.453497 | orchestrator | 2025-05-31 16:33:42 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:42.454199 | orchestrator | 2025-05-31 16:33:42 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:42.454370 | orchestrator | 2025-05-31 16:33:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:45.493533 | orchestrator | 2025-05-31 16:33:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:45.494995 | orchestrator | 2025-05-31 16:33:45 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:45.495427 | orchestrator | 2025-05-31 16:33:45 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:33:45.495965 | orchestrator | 2025-05-31 16:33:45 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:45.497046 | orchestrator | 2025-05-31 16:33:45 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:45.497117 | orchestrator | 2025-05-31 16:33:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:48.525997 | orchestrator | 2025-05-31 16:33:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:48.527621 | orchestrator | 2025-05-31 16:33:48 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:48.529394 | orchestrator | 2025-05-31 16:33:48 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 
16:33:48.530815 | orchestrator | 2025-05-31 16:33:48 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:48.531782 | orchestrator | 2025-05-31 16:33:48 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:48.531807 | orchestrator | 2025-05-31 16:33:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:51.577270 | orchestrator | 2025-05-31 16:33:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:51.578490 | orchestrator | 2025-05-31 16:33:51 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:51.579814 | orchestrator | 2025-05-31 16:33:51 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:33:51.580850 | orchestrator | 2025-05-31 16:33:51 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:51.582100 | orchestrator | 2025-05-31 16:33:51 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:51.582142 | orchestrator | 2025-05-31 16:33:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:54.622368 | orchestrator | 2025-05-31 16:33:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:54.625578 | orchestrator | 2025-05-31 16:33:54 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:54.627782 | orchestrator | 2025-05-31 16:33:54 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:33:54.630441 | orchestrator | 2025-05-31 16:33:54 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:54.632250 | orchestrator | 2025-05-31 16:33:54 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:54.632452 | orchestrator | 2025-05-31 16:33:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:33:57.684147 | orchestrator | 2025-05-31 16:33:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:33:57.685992 | orchestrator | 2025-05-31 16:33:57 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:33:57.688576 | orchestrator | 2025-05-31 16:33:57 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:33:57.691698 | orchestrator | 2025-05-31 16:33:57 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:33:57.693102 | orchestrator | 2025-05-31 16:33:57 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:33:57.693139 | orchestrator | 2025-05-31 16:33:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:00.733435 | orchestrator | 2025-05-31 16:34:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:00.738110 | orchestrator | 2025-05-31 16:34:00 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:00.741265 | orchestrator | 2025-05-31 16:34:00 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:00.741301 | orchestrator | 2025-05-31 16:34:00 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:00.741807 | orchestrator | 2025-05-31 16:34:00 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:00.742390 | orchestrator | 2025-05-31 16:34:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 
16:34:03.776255 | orchestrator | 2025-05-31 16:34:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:03.781540 | orchestrator | 2025-05-31 16:34:03 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:03.781573 | orchestrator | 2025-05-31 16:34:03 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:03.781585 | orchestrator | 2025-05-31 16:34:03 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:03.781596 | orchestrator | 2025-05-31 16:34:03 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:03.781607 | orchestrator | 2025-05-31 16:34:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:06.829076 | orchestrator | 2025-05-31 16:34:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:06.829962 | orchestrator | 2025-05-31 16:34:06 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:06.831316 | orchestrator | 2025-05-31 16:34:06 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:06.832269 | orchestrator | 2025-05-31 16:34:06 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:06.835684 | orchestrator | 2025-05-31 16:34:06 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:06.835716 | orchestrator | 2025-05-31 16:34:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:09.869855 | orchestrator | 2025-05-31 16:34:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:09.871222 | orchestrator | 2025-05-31 16:34:09 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:09.874523 | orchestrator | 2025-05-31 16:34:09 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:09.876333 | orchestrator | 2025-05-31 16:34:09 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:09.878779 | orchestrator | 2025-05-31 16:34:09 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:09.879189 | orchestrator | 2025-05-31 16:34:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:12.912362 | orchestrator | 2025-05-31 16:34:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:12.912723 | orchestrator | 2025-05-31 16:34:12 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:12.913463 | orchestrator | 2025-05-31 16:34:12 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:12.914001 | orchestrator | 2025-05-31 16:34:12 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:12.915647 | orchestrator | 2025-05-31 16:34:12 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:12.915696 | orchestrator | 2025-05-31 16:34:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:15.944011 | orchestrator | 2025-05-31 16:34:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:15.944172 | orchestrator | 2025-05-31 16:34:15 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:15.944685 | orchestrator | 2025-05-31 16:34:15 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in 
state STARTED 2025-05-31 16:34:15.945249 | orchestrator | 2025-05-31 16:34:15 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:15.946328 | orchestrator | 2025-05-31 16:34:15 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:15.946352 | orchestrator | 2025-05-31 16:34:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:18.976068 | orchestrator | 2025-05-31 16:34:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:18.976239 | orchestrator | 2025-05-31 16:34:18 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:18.976738 | orchestrator | 2025-05-31 16:34:18 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:18.977386 | orchestrator | 2025-05-31 16:34:18 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:18.978012 | orchestrator | 2025-05-31 16:34:18 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:18.978080 | orchestrator | 2025-05-31 16:34:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:22.013243 | orchestrator | 2025-05-31 16:34:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:22.013386 | orchestrator | 2025-05-31 16:34:22 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:22.014630 | orchestrator | 2025-05-31 16:34:22 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:22.014666 | orchestrator | 2025-05-31 16:34:22 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:22.015510 | orchestrator | 2025-05-31 16:34:22 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:22.015534 | orchestrator | 2025-05-31 16:34:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:25.044764 | orchestrator | 2025-05-31 16:34:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:25.044836 | orchestrator | 2025-05-31 16:34:25 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:25.046146 | orchestrator | 2025-05-31 16:34:25 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:25.046817 | orchestrator | 2025-05-31 16:34:25 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:25.047709 | orchestrator | 2025-05-31 16:34:25 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:25.047786 | orchestrator | 2025-05-31 16:34:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:28.082309 | orchestrator | 2025-05-31 16:34:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:28.082736 | orchestrator | 2025-05-31 16:34:28 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:28.083489 | orchestrator | 2025-05-31 16:34:28 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:28.084177 | orchestrator | 2025-05-31 16:34:28 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:28.085041 | orchestrator | 2025-05-31 16:34:28 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:28.085074 | orchestrator | 2025-05-31 16:34:28 | INFO  | Wait 1 second(s) until the 
next check 2025-05-31 16:34:31.121833 | orchestrator | 2025-05-31 16:34:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:31.122158 | orchestrator | 2025-05-31 16:34:31 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:31.122787 | orchestrator | 2025-05-31 16:34:31 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:31.123627 | orchestrator | 2025-05-31 16:34:31 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:31.124145 | orchestrator | 2025-05-31 16:34:31 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:31.124167 | orchestrator | 2025-05-31 16:34:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:34.169246 | orchestrator | 2025-05-31 16:34:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:34.169652 | orchestrator | 2025-05-31 16:34:34 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:34.170271 | orchestrator | 2025-05-31 16:34:34 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:34.170973 | orchestrator | 2025-05-31 16:34:34 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:34.171716 | orchestrator | 2025-05-31 16:34:34 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:34.171758 | orchestrator | 2025-05-31 16:34:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:37.207555 | orchestrator | 2025-05-31 16:34:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:37.207604 | orchestrator | 2025-05-31 16:34:37 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:37.207618 | orchestrator | 2025-05-31 16:34:37 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:37.207629 | orchestrator | 2025-05-31 16:34:37 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:37.207640 | orchestrator | 2025-05-31 16:34:37 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:37.207651 | orchestrator | 2025-05-31 16:34:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:40.232764 | orchestrator | 2025-05-31 16:34:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:40.233035 | orchestrator | 2025-05-31 16:34:40 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:40.233754 | orchestrator | 2025-05-31 16:34:40 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:40.234442 | orchestrator | 2025-05-31 16:34:40 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:40.235084 | orchestrator | 2025-05-31 16:34:40 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:40.235271 | orchestrator | 2025-05-31 16:34:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:43.263130 | orchestrator | 2025-05-31 16:34:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:43.264446 | orchestrator | 2025-05-31 16:34:43 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:43.264900 | orchestrator | 2025-05-31 16:34:43 | INFO  | Task 
6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:43.265430 | orchestrator | 2025-05-31 16:34:43 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:43.265990 | orchestrator | 2025-05-31 16:34:43 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:43.266117 | orchestrator | 2025-05-31 16:34:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:46.298197 | orchestrator | 2025-05-31 16:34:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:46.299696 | orchestrator | 2025-05-31 16:34:46 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:46.300183 | orchestrator | 2025-05-31 16:34:46 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:46.300646 | orchestrator | 2025-05-31 16:34:46 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state STARTED 2025-05-31 16:34:46.301180 | orchestrator | 2025-05-31 16:34:46 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:46.301255 | orchestrator | 2025-05-31 16:34:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:49.329270 | orchestrator | 2025-05-31 16:34:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:49.329610 | orchestrator | 2025-05-31 16:34:49 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:49.330373 | orchestrator | 2025-05-31 16:34:49 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:49.331851 | orchestrator | 2025-05-31 16:34:49 | INFO  | Task 56ba4c1a-43fc-4736-9660-a157a437ca22 is in state SUCCESS 2025-05-31 16:34:49.332038 | orchestrator | 2025-05-31 16:34:49.332089 | orchestrator | 2025-05-31 16:34:49.332104 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:34:49.332116 | orchestrator | 2025-05-31 16:34:49.332127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:34:49.332138 | orchestrator | Saturday 31 May 2025 16:33:09 +0000 (0:00:00.212) 0:00:00.212 ********** 2025-05-31 16:34:49.332150 | orchestrator | ok: [testbed-manager] 2025-05-31 16:34:49.332162 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:34:49.332173 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:34:49.332232 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:34:49.332244 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:34:49.332255 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:34:49.332265 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:34:49.332276 | orchestrator | 2025-05-31 16:34:49.332301 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:34:49.332313 | orchestrator | Saturday 31 May 2025 16:33:09 +0000 (0:00:00.650) 0:00:00.862 ********** 2025-05-31 16:34:49.332324 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-31 16:34:49.332336 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-31 16:34:49.332347 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-31 16:34:49.332357 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-31 16:34:49.332368 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-31 16:34:49.332379 | orchestrator | ok: 
[testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-31 16:34:49.332389 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-31 16:34:49.332400 | orchestrator |
2025-05-31 16:34:49.332411 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-31 16:34:49.332422 | orchestrator |
2025-05-31 16:34:49.332457 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-31 16:34:49.332468 | orchestrator | Saturday 31 May 2025 16:33:10 +0000 (0:00:00.778) 0:00:01.641 **********
2025-05-31 16:34:49.332479 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 16:34:49.332491 | orchestrator |
2025-05-31 16:34:49.332501 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-31 16:34:49.332512 | orchestrator | Saturday 31 May 2025 16:33:11 +0000 (0:00:01.117) 0:00:02.759 **********
2025-05-31 16:34:49.332522 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-05-31 16:34:49.332533 | orchestrator |
2025-05-31 16:34:49.332544 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-31 16:34:49.332554 | orchestrator | Saturday 31 May 2025 16:33:14 +0000 (0:00:03.086) 0:00:05.845 **********
2025-05-31 16:34:49.332565 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-31 16:34:49.332577 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-31 16:34:49.332588 | orchestrator |
2025-05-31 16:34:49.332598 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-31 16:34:49.332609 | orchestrator | Saturday 31 May 2025 16:33:21 +0000 (0:00:06.416) 0:00:12.262 **********
2025-05-31 16:34:49.332620 | orchestrator | ok: [testbed-manager] => (item=service)
2025-05-31 16:34:49.332630 | orchestrator |
2025-05-31 16:34:49.332641 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-31 16:34:49.332651 | orchestrator | Saturday 31 May 2025 16:33:24 +0000 (0:00:02.850) 0:00:15.113 **********
2025-05-31 16:34:49.332662 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-31 16:34:49.332675 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-05-31 16:34:49.332687 | orchestrator |
2025-05-31 16:34:49.332699 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-31 16:34:49.332711 | orchestrator | Saturday 31 May 2025 16:33:27 +0000 (0:00:03.432) 0:00:18.545 **********
2025-05-31 16:34:49.332723 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-05-31 16:34:49.332735 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-05-31 16:34:49.332747 | orchestrator |
2025-05-31 16:34:49.332759 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-31 16:34:49.332771 | orchestrator | Saturday 31 May 2025 16:33:33 +0000 (0:00:05.625) 0:00:24.171 **********
2025-05-31 16:34:49.332782 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2025-05-31 16:34:49.332794 | orchestrator |
2025-05-31 16:34:49.332806 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 16:34:49.332817 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332830 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332843 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332855 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332885 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332910 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332923 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 16:34:49.332945 | orchestrator |
2025-05-31 16:34:49.332957 | orchestrator |
2025-05-31 16:34:49.332969 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 16:34:49.332981 | orchestrator | Saturday 31 May 2025 16:33:37 +0000 (0:00:04.517) 0:00:28.689 **********
2025-05-31 16:34:49.332993 | orchestrator | ===============================================================================
2025-05-31 16:34:49.333005 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.42s
2025-05-31 16:34:49.333023 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.63s
2025-05-31 16:34:49.333034 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.52s
2025-05-31 16:34:49.333045 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.43s
2025-05-31 16:34:49.333055 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.09s
2025-05-31 16:34:49.333066 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.85s
2025-05-31 16:34:49.333076 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.12s
2025-05-31 16:34:49.333087 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s
2025-05-31 16:34:49.333098 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s
2025-05-31 16:34:49.333109 | orchestrator |
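The service-ks-register tasks above create the Keystone service, endpoints, project, user and role assignments for the Ceph RADOS Gateway through Ansible modules. For orientation only, a rough openstacksdk sketch of the same sequence; the cloud name, region and password are illustrative placeholders, not values taken from this deployment:

import openstack

# Illustrative equivalent of the "ceph-rgw | Creating services/endpoints/users/roles"
# tasks above, done directly with openstacksdk instead of the role's Ansible modules.
conn = openstack.connect(cloud="testbed")  # clouds.yaml entry name is a placeholder

service = conn.identity.create_service(name="swift", type="object-store")
endpoints = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}
for interface, url in endpoints.items():
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url,
        region_id="RegionOne",  # region is an assumption, not shown in the log
    )

project = conn.identity.find_project("service")
user = conn.identity.create_user(
    name="ceph_rgw",
    password="CHANGEME",  # placeholder; the real password comes from the secrets store
    default_project_id=project.id,
)
conn.identity.create_role(name="ResellerAdmin")
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project.id, user.id, admin.id)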
2025-05-31 16:34:49.333200 | orchestrator | 2025-05-31 16:34:49 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED
2025-05-31 16:34:49.333215 | orchestrator | 2025-05-31 16:34:49 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:34:52.364386 | orchestrator | 2025-05-31 16:34:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:34:52.364476 | orchestrator | 2025-05-31 16:34:52 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED
2025-05-31 16:34:52.364812 | orchestrator | 2025-05-31 16:34:52 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED
2025-05-31 16:34:52.365325 | orchestrator | 2025-05-31 16:34:52 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED
2025-05-31 16:34:52.366152 | orchestrator | 2025-05-31 16:34:52 | INFO  | Task 
18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:52.366176 | orchestrator | 2025-05-31 16:34:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:55.390235 | orchestrator | 2025-05-31 16:34:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:55.390534 | orchestrator | 2025-05-31 16:34:55 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:55.390989 | orchestrator | 2025-05-31 16:34:55 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:34:55.391561 | orchestrator | 2025-05-31 16:34:55 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:55.392063 | orchestrator | 2025-05-31 16:34:55 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:55.392134 | orchestrator | 2025-05-31 16:34:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:34:58.416401 | orchestrator | 2025-05-31 16:34:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:34:58.416501 | orchestrator | 2025-05-31 16:34:58 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:34:58.416851 | orchestrator | 2025-05-31 16:34:58 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:34:58.417302 | orchestrator | 2025-05-31 16:34:58 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:34:58.417741 | orchestrator | 2025-05-31 16:34:58 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:34:58.418165 | orchestrator | 2025-05-31 16:34:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:01.452190 | orchestrator | 2025-05-31 16:35:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:01.452284 | orchestrator | 2025-05-31 16:35:01 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:01.455835 | orchestrator | 2025-05-31 16:35:01 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:01.456296 | orchestrator | 2025-05-31 16:35:01 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:01.456687 | orchestrator | 2025-05-31 16:35:01 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:01.456708 | orchestrator | 2025-05-31 16:35:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:04.479730 | orchestrator | 2025-05-31 16:35:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:04.484624 | orchestrator | 2025-05-31 16:35:04 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:04.485980 | orchestrator | 2025-05-31 16:35:04 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:04.486366 | orchestrator | 2025-05-31 16:35:04 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:04.486963 | orchestrator | 2025-05-31 16:35:04 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:04.486986 | orchestrator | 2025-05-31 16:35:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:07.513071 | orchestrator | 2025-05-31 16:35:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:07.516648 | orchestrator | 2025-05-31 16:35:07 | INFO  | Task 
ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:07.516693 | orchestrator | 2025-05-31 16:35:07 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:07.516705 | orchestrator | 2025-05-31 16:35:07 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:07.516716 | orchestrator | 2025-05-31 16:35:07 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:07.516727 | orchestrator | 2025-05-31 16:35:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:10.553496 | orchestrator | 2025-05-31 16:35:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:10.554291 | orchestrator | 2025-05-31 16:35:10 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:10.554821 | orchestrator | 2025-05-31 16:35:10 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:10.556200 | orchestrator | 2025-05-31 16:35:10 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:10.557650 | orchestrator | 2025-05-31 16:35:10 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:10.557684 | orchestrator | 2025-05-31 16:35:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:13.590988 | orchestrator | 2025-05-31 16:35:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:13.591061 | orchestrator | 2025-05-31 16:35:13 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:13.591100 | orchestrator | 2025-05-31 16:35:13 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:13.591114 | orchestrator | 2025-05-31 16:35:13 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:13.593244 | orchestrator | 2025-05-31 16:35:13 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:13.593282 | orchestrator | 2025-05-31 16:35:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:16.621085 | orchestrator | 2025-05-31 16:35:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:16.621191 | orchestrator | 2025-05-31 16:35:16 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:16.626880 | orchestrator | 2025-05-31 16:35:16 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:16.627761 | orchestrator | 2025-05-31 16:35:16 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:16.633465 | orchestrator | 2025-05-31 16:35:16 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:16.633511 | orchestrator | 2025-05-31 16:35:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:19.663534 | orchestrator | 2025-05-31 16:35:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:19.663714 | orchestrator | 2025-05-31 16:35:19 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:19.664463 | orchestrator | 2025-05-31 16:35:19 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:19.664946 | orchestrator | 2025-05-31 16:35:19 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:19.665537 | orchestrator | 2025-05-31 
16:35:19 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:19.665561 | orchestrator | 2025-05-31 16:35:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:22.698330 | orchestrator | 2025-05-31 16:35:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:22.699246 | orchestrator | 2025-05-31 16:35:22 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:22.700434 | orchestrator | 2025-05-31 16:35:22 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:22.701319 | orchestrator | 2025-05-31 16:35:22 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:22.702119 | orchestrator | 2025-05-31 16:35:22 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:22.702333 | orchestrator | 2025-05-31 16:35:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:25.738470 | orchestrator | 2025-05-31 16:35:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:25.742247 | orchestrator | 2025-05-31 16:35:25 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:25.742797 | orchestrator | 2025-05-31 16:35:25 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:25.743476 | orchestrator | 2025-05-31 16:35:25 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:25.744287 | orchestrator | 2025-05-31 16:35:25 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:25.744310 | orchestrator | 2025-05-31 16:35:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:28.771613 | orchestrator | 2025-05-31 16:35:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:28.772022 | orchestrator | 2025-05-31 16:35:28 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:28.772573 | orchestrator | 2025-05-31 16:35:28 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:28.773462 | orchestrator | 2025-05-31 16:35:28 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:28.773986 | orchestrator | 2025-05-31 16:35:28 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:28.774008 | orchestrator | 2025-05-31 16:35:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:31.816994 | orchestrator | 2025-05-31 16:35:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:31.817364 | orchestrator | 2025-05-31 16:35:31 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:31.818925 | orchestrator | 2025-05-31 16:35:31 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:31.820108 | orchestrator | 2025-05-31 16:35:31 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:31.821432 | orchestrator | 2025-05-31 16:35:31 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:31.821545 | orchestrator | 2025-05-31 16:35:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:34.879156 | orchestrator | 2025-05-31 16:35:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:34.879241 | orchestrator | 2025-05-31 
16:35:34 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:34.881240 | orchestrator | 2025-05-31 16:35:34 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:34.883362 | orchestrator | 2025-05-31 16:35:34 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:34.885031 | orchestrator | 2025-05-31 16:35:34 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:34.885321 | orchestrator | 2025-05-31 16:35:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:37.958907 | orchestrator | 2025-05-31 16:35:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:37.962877 | orchestrator | 2025-05-31 16:35:37 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:37.962916 | orchestrator | 2025-05-31 16:35:37 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:37.964499 | orchestrator | 2025-05-31 16:35:37 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:37.966107 | orchestrator | 2025-05-31 16:35:37 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:37.966133 | orchestrator | 2025-05-31 16:35:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:41.029503 | orchestrator | 2025-05-31 16:35:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:41.031006 | orchestrator | 2025-05-31 16:35:41 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:41.031895 | orchestrator | 2025-05-31 16:35:41 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:41.037084 | orchestrator | 2025-05-31 16:35:41 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:41.037137 | orchestrator | 2025-05-31 16:35:41 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:41.037438 | orchestrator | 2025-05-31 16:35:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:44.081278 | orchestrator | 2025-05-31 16:35:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:44.081387 | orchestrator | 2025-05-31 16:35:44 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:44.081401 | orchestrator | 2025-05-31 16:35:44 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:44.081413 | orchestrator | 2025-05-31 16:35:44 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:44.081424 | orchestrator | 2025-05-31 16:35:44 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:44.081435 | orchestrator | 2025-05-31 16:35:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:47.120807 | orchestrator | 2025-05-31 16:35:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:47.121572 | orchestrator | 2025-05-31 16:35:47 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:47.122365 | orchestrator | 2025-05-31 16:35:47 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:47.123011 | orchestrator | 2025-05-31 16:35:47 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:47.123884 | 
orchestrator | 2025-05-31 16:35:47 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:47.123909 | orchestrator | 2025-05-31 16:35:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:50.160133 | orchestrator | 2025-05-31 16:35:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:50.162354 | orchestrator | 2025-05-31 16:35:50 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:50.164058 | orchestrator | 2025-05-31 16:35:50 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:50.166889 | orchestrator | 2025-05-31 16:35:50 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:50.167647 | orchestrator | 2025-05-31 16:35:50 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:50.167671 | orchestrator | 2025-05-31 16:35:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:53.209032 | orchestrator | 2025-05-31 16:35:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:53.213232 | orchestrator | 2025-05-31 16:35:53 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:53.214113 | orchestrator | 2025-05-31 16:35:53 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:53.217131 | orchestrator | 2025-05-31 16:35:53 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:53.218976 | orchestrator | 2025-05-31 16:35:53 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:53.219650 | orchestrator | 2025-05-31 16:35:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:56.259600 | orchestrator | 2025-05-31 16:35:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:56.260910 | orchestrator | 2025-05-31 16:35:56 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state STARTED 2025-05-31 16:35:56.262466 | orchestrator | 2025-05-31 16:35:56 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:56.264034 | orchestrator | 2025-05-31 16:35:56 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:56.265222 | orchestrator | 2025-05-31 16:35:56 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:56.265251 | orchestrator | 2025-05-31 16:35:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:35:59.322458 | orchestrator | 2025-05-31 16:35:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:35:59.323719 | orchestrator | 2025-05-31 16:35:59 | INFO  | Task ce059453-2065-4b7a-8f40-a703a68ab0bb is in state SUCCESS 2025-05-31 16:35:59.328524 | orchestrator | 2025-05-31 16:35:59.328573 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-31 16:35:59.328587 | orchestrator | 2025-05-31 16:35:59.328680 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-31 16:35:59.328759 | orchestrator | Saturday 31 May 2025 16:28:33 +0000 (0:00:00.210) 0:00:00.210 ********** 2025-05-31 16:35:59.328819 | orchestrator | changed: [localhost] 2025-05-31 16:35:59.328894 | orchestrator | 2025-05-31 16:35:59.328906 | orchestrator | TASK [Download ironic-agent initramfs] 
orchestrator | Saturday 31 May 2025 16:28:34 +0000 (0:00:00.615)       0:00:00.826 **********
orchestrator |
orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
orchestrator | (the keep-alive message above is printed eight times while the download runs; repetitions condensed)
orchestrator | changed: [localhost]
orchestrator |
orchestrator | TASK [Download ironic-agent kernel] ********************************************
orchestrator | Saturday 31 May 2025 16:34:33 +0000 (0:05:59.607)       0:06:00.433 **********
orchestrator | changed: [localhost]
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Saturday 31 May 2025 16:34:47 +0000 (0:00:13.152)       0:06:13.586 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Saturday 31 May 2025 16:34:47 +0000 (0:00:00.412)       0:06:13.999 **********
orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
orchestrator |
orchestrator | PLAY [Apply role ironic] *******************************************************
orchestrator | skipping: no hosts matched
orchestrator |
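orchestrator | The "Download ironic ipa images" play amounts to creating a destination directory and then fetching the ironic-agent initramfs and kernel over HTTP; the initramfs fetch dominates the runtime (about six minutes in this run). A rough Python equivalent of that download step, where the URLs and destination path are illustrative placeholders rather than the values used by the playbook:

import urllib.request
from pathlib import Path

# Illustrative values; the real playbook takes these from its variables.
DEST_DIR = Path("/opt/ironic-agent")
IMAGES = {
    "ironic-agent.initramfs": "https://example.org/ipa/ironic-python-agent.initramfs",
    "ironic-agent.kernel": "https://example.org/ipa/ironic-python-agent.kernel",
}

def download_ipa_images():
    # Ensure the destination directory exists (first task in the play).
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    for name, url in IMAGES.items():
        target = DEST_DIR / name
        # Fetch each image; a large initramfs can take several minutes.
        urllib.request.urlretrieve(url, str(target))
        print(f"downloaded {url} -> {target}")

if __name__ == "__main__":
    download_ipa_images()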
orchestrator | PLAY RECAP *********************************************************************
orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator |
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Saturday 31 May 2025 16:34:48 +0000 (0:00:00.582)       0:06:14.581 **********
orchestrator | ===============================================================================
orchestrator | Download ironic-agent initramfs --------------------------------------- 359.61s
orchestrator | Download ironic-agent kernel ------------------------------------------- 13.15s
orchestrator | Ensure the destination directory exists --------------------------------- 0.61s
orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
orchestrator |
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Saturday 31 May 2025 16:32:00 +0000 (0:00:00.209)       0:00:00.209 **********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Saturday 31 May 2025 16:32:01 +0000 (0:00:00.605)       0:00:00.814 **********
orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-31 16:35:59.330308 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-31 16:35:59.330319 | orchestrator | 2025-05-31 16:35:59.330383 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-31 16:35:59.330394 | orchestrator | 2025-05-31 16:35:59.330405 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-31 16:35:59.330415 | orchestrator | Saturday 31 May 2025 16:32:02 +0000 (0:00:00.644) 0:00:01.459 ********** 2025-05-31 16:35:59.330480 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:35:59.330494 | orchestrator | 2025-05-31 16:35:59.330550 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-31 16:35:59.330569 | orchestrator | Saturday 31 May 2025 16:32:03 +0000 (0:00:01.132) 0:00:02.591 ********** 2025-05-31 16:35:59.330591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.330616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.330636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  
2025-05-31 16:35:59.330681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.330703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.330736 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:35:59.330757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.330770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.330781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.330818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.330885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.330906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.330918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.330930 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.330942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.330995 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.331032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.331142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.331292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.331352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.331379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.331391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331436 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:35:59.331466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.331479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.331567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.331578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331590 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331720 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.331733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.331755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.331766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.331897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 
16:35:59.331926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.331947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.331967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.331979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.332027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.332049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.332113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.332160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 
'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.332224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332295 | orchestrator | 2025-05-31 16:35:59.332309 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-31 16:35:59.332320 | orchestrator | Saturday 31 May 2025 16:32:06 +0000 (0:00:03.160) 0:00:05.752 ********** 2025-05-31 16:35:59.332332 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:35:59.332343 | orchestrator | 2025-05-31 16:35:59.332354 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-31 16:35:59.332364 | orchestrator | Saturday 31 May 2025 16:32:07 +0000 (0:00:01.521) 0:00:07.273 ********** 2025-05-31 16:35:59.332374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:35:59.332385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332460 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
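Every loop item in these prometheus tasks has the same shape: the key names the service and the value describes its container. The sketch below reconstructs that structure from the values printed in this task; the keys (container_name, group, enabled, image, volumes, dimensions, plus the optional pid_mode, environment and haproxy entries) are taken from the log, while the surrounding variable name prometheus_services is only an assumed label for illustration.

# Minimal sketch of the per-service structure seen in the loop items above.
# Two representative entries; the values are copied from the log, the dict
# name itself is an assumption.
prometheus_services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",   # hosts in this inventory group run the container
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
    "prometheus-server": {
        "container_name": "prometheus_server",
        "group": "prometheus",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206",
        "volumes": [
            "/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "prometheus_v2:/var/lib/prometheus",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "haproxy": {
            "prometheus_server": {"enabled": True, "mode": "http", "external": False,
                                  "port": "9091", "active_passive": True},
            "prometheus_server_external": {"enabled": False, "mode": "http", "external": True,
                                           "external_fqdn": "api.testbed.osism.xyz",
                                           "port": "9091", "listen_port": "9091",
                                           "active_passive": True},
        },
    },
}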
'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.332480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332516 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332604 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:35:59.332615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332656 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332682 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
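The haproxy sub-dict attached to prometheus-server and prometheus-alertmanager does not describe the container itself; it declares the load-balancer front-ends: an internal one (port 9091 for the server, 9093 for Alertmanager, both active/passive) and an external one behind api.testbed.osism.xyz, which is enabled only for Alertmanager and protected by HTTP basic auth. The snippet below just walks the Alertmanager entry copied from this output (credentials omitted) to list the front-ends it would enable; the loop is a reading aid, not kolla-ansible's HAProxy template logic.

# Reading aid: list which HAProxy front-ends the 'haproxy' sub-dicts above
# would enable. The data is copied from the prometheus-alertmanager item
# (auth_pass left out); the loop itself is not part of kolla-ansible.
alertmanager_haproxy = {
    "prometheus_alertmanager": {
        "enabled": True, "mode": "http", "external": False,
        "port": "9093", "auth_user": "admin", "active_passive": True,
    },
    "prometheus_alertmanager_external": {
        "enabled": True, "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9093", "listen_port": "9093",
        "auth_user": "admin", "active_passive": True,
    },
}

for name, cfg in alertmanager_haproxy.items():
    if cfg["enabled"]:
        scope = "external" if cfg["external"] else "internal"
        auth = " with basic auth" if cfg.get("auth_user") else ""
        print(f"{name}: {scope} front-end on port {cfg['port']}{auth}")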
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.332702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.332742 | orchestrator | 2025-05-31 16:35:59.332751 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-31 16:35:59.332761 | orchestrator | Saturday 31 May 2025 16:32:13 +0000 (0:00:05.424) 0:00:12.698 ********** 2025-05-31 16:35:59.332771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.332787 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.332817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.332890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.332934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.332944 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.332954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.332965 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333394 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.333413 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333431 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.333441 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.333451 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.333461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.333472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333523 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.333534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.333575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333597 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.333607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.333617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
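Both this task and the following one, which copies the backend internal TLS key, report skipping for every item on every host. That pattern is what you would expect when backend TLS is not enabled for Prometheus in this environment (in kolla-ansible that is normally governed by a flag such as kolla_enable_tls_backend; reading it that way here is an assumption, the log itself only shows the skips). In the tasks that do make changes, an item is handled on a host only when the service is enabled and the host belongs to the service's inventory group. The sketch below approximates that selection logic; the helper and the group membership it encodes are inferred from the changed/skipping pattern above, not taken from the role.

# Illustrative approximation of the per-host changed/skipping pattern seen
# in these tasks. The helper is hypothetical and the group membership below
# is inferred from which hosts report "changed"; neither comes from the role.
def host_handles(service: dict, host: str, groups: dict,
                 backend_tls: bool = False, needs_backend_tls: bool = False) -> bool:
    if needs_backend_tls and not backend_tls:
        return False          # the two backend TLS tasks skip on every host
    if not service["enabled"]:
        return False          # e.g. prometheus-msteams is disabled everywhere
    return host in groups.get(service["group"], [])

groups = {
    "prometheus": ["testbed-manager"],
    "prometheus-alertmanager": ["testbed-manager"],
    "prometheus-blackbox-exporter": ["testbed-manager"],
    "prometheus-node-exporter": ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)],
    "prometheus-cadvisor": ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)],
    "prometheus-mysqld-exporter": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "prometheus-memcached-exporter": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "prometheus-elasticsearch-exporter": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "prometheus-libvirt-exporter": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
}

# Matches the "changed" result for the libvirt exporter on testbed-node-4 above.
print(host_handles({"enabled": True, "group": "prometheus-libvirt-exporter"},
                   "testbed-node-4", groups))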
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333636 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.333646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.333671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333699 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.333709 | orchestrator | 2025-05-31 16:35:59.333719 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-31 16:35:59.333728 | orchestrator | Saturday 31 May 2025 16:32:15 +0000 (0:00:01.992) 0:00:14.691 ********** 2025-05-31 16:35:59.333739 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.333749 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.333759 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333769 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.333817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333913 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.333926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.333936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.333966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.333975 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.333985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.334007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334132 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.334143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.334153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334348 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.334359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.334371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334392 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.334402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.334412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334438 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.334453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 16:35:59.334470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.334491 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.334500 | orchestrator | 2025-05-31 16:35:59.334510 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-31 16:35:59.334522 | orchestrator | Saturday 31 May 2025 16:32:18 +0000 (0:00:03.135) 0:00:17.826 ********** 2025-05-31 16:35:59.334539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': 
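The config.json rendered by this task is the small control file a kolla container reads at start-up: it tells the kolla_start entrypoint which command to launch and which files to copy out of /var/lib/kolla/config_files/ into place. Because the task loops over the same per-service dict, prometheus-server only comes back changed on testbed-manager and is skipped on the other nodes, which suggests the manager is the only member of the prometheus group in this testbed. The sketch below shows the general shape of such a file for the node exporter; the command, paths and permissions are placeholders, not the values rendered in this run.

# Hypothetical example of a kolla config.json for the node exporter.
# The real file is produced from the role's templates; every value below
# is a placeholder chosen for illustration.
import json

node_exporter_config = {
    "command": "/opt/node_exporter/node_exporter --path.rootfs=/host",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/web-config.yml",
            "dest": "/etc/node_exporter/web-config.yml",
            "owner": "prometheus",
            "perm": "0600",
        }
    ],
}

print(json.dumps(node_exporter_config, indent=2))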
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.334556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.334572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.334623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.334643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.334663 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:35:59.334681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.334692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.334709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.334731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.334742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.334752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334772 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.334782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334798 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.334875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.334903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.334932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.334943 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.334953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.334970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.334991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.335004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335025 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.335035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.335107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335163 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:35:59.335175 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.335321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335378 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.335445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335455 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335465 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.335484 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.335521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.335571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.335590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.335600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.335640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.335686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.335696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.335706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 16:35:59.335715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-31 16:35:59.335735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 16:35:59.335746 | orchestrator |
2025-05-31 16:35:59.335755 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-31 16:35:59.335765 | orchestrator | Saturday 31 May 2025 16:32:26 +0000 (0:00:07.982) 0:00:25.808 **********
2025-05-31 16:35:59.335775 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 16:35:59.335792 | orchestrator |
2025-05-31 16:35:59.335802 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-31 16:35:59.335811 | orchestrator | Saturday 31 May 2025 16:32:26 +0000 (0:00:00.446) 0:00:26.255 **********
2025-05-31 16:35:59.335880 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-31 16:35:59.335895 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-31 16:35:59.335905 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name':
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.335915 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.335924 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.335968 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.335980 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.335997 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336007 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 
'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336017 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336027 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336036 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1089342, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1287603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.336061 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336072 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336088 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336098 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336108 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336117 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336127 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336166 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336176 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336186 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336196 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336233 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336243 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336263 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336280 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336290 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336310 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1089357, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.336319 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336329 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336735 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336760 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336768 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336776 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336784 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336792 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336800 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336861 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336878 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336886 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336895 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336903 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336911 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336919 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336941 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336950 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 
16:35:59.336958 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336965 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1089345, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.336973 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336981 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.336989 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337011 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337019 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337027 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337035 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337043 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337051 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337059 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.337072 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 
'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337080 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.337096 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337105 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337113 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089356, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337121 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337129 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337137 | orchestrator | skipping: [testbed-node-4] 2025-05-31 
16:35:59.337145 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337158 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337166 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.337183 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337191 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.337200 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 16:35:59.337208 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.337215 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1467607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337224 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 12293, 'inode': 1089361, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1357605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337232 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089355, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1317604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337244 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089359, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1327605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337253 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1457608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337269 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089348, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1297605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337278 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1089379, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.1417606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 16:35:59.337285 | orchestrator | 2025-05-31 16:35:59.337294 | orchestrator | TASK [prometheus : Find prometheus common config 
overrides] ********************
2025-05-31 16:35:59.337302 | orchestrator | Saturday 31 May 2025 16:33:00 +0000 (0:00:33.294) 0:00:59.550 **********
2025-05-31 16:35:59.337310 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 16:35:59.337318 | orchestrator |
2025-05-31 16:35:59.337326 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-31 16:35:59.337334 | orchestrator | Saturday 31 May 2025 16:33:00 +0000 (0:00:00.427) 0:00:59.977 **********
2025-05-31 16:35:59.337342 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337350 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337359 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337368 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337377 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337386 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 16:35:59.337395 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337404 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337412 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337421 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337430 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337443 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-31 16:35:59.337452 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337461 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337470 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337478 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337487 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337496 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-31 16:35:59.337505 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337513 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337522 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337531 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337540 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337549 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337557 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337566 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337574 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337583 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337591 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337601 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337609 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337616 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337624 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337632 | orchestrator | [WARNING]: Skipped
2025-05-31 16:35:59.337640 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337648 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-31 16:35:59.337655 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-31 16:35:59.337663 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-31 16:35:59.337671 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-31 16:35:59.337679 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-31 16:35:59.337687 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-31 16:35:59.337694 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-31 16:35:59.337702 | orchestrator |
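
The "is not a directory" warnings above are harmless: the task only probes for optional per-host override fragments under /opt/configuration/environments/kolla/files/overlays/prometheus/<inventory_hostname>/prometheus.yml.d/, and this testbed configuration does not ship any. As a rough sketch only (hypothetical file name and scrape job, assuming such fragments are merged into the generated prometheus.yml), a host-specific drop-in could look like:

  # overlays/prometheus/testbed-manager/prometheus.yml.d/99-extra-scrape.yml (hypothetical)
  scrape_configs:
    - job_name: extra-node-exporter      # hypothetical job name
      scrape_interval: 60s
      static_configs:
        - targets:
            - "192.168.16.5:9100"        # placeholder target address
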
2025-05-31 16:35:59.337710 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-31 16:35:59.337717 | orchestrator | Saturday 31 May 2025 16:33:02 +0000 (0:00:01.494) 0:01:01.472 **********
2025-05-31 16:35:59.337725 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337741 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337749 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:35:59.337757 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:35:59.337765 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337773 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:35:59.337780 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337788 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:35:59.337796 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337804 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:35:59.337812 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337843 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:35:59.337852 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 16:35:59.337860 | orchestrator |
2025-05-31 16:35:59.337868 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-31 16:35:59.337875 | orchestrator | Saturday 31 May 2025 16:33:17 +0000 (0:00:15.333) 0:01:16.806 **********
2025-05-31 16:35:59.337883 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337890 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:35:59.337898 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337906 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:35:59.337913 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337921 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:35:59.337928 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337936 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:35:59.337944 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337951 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:35:59.337959 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337966 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:35:59.337974 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 16:35:59.337981 | orchestrator |
2025-05-31 16:35:59.337989 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-31 16:35:59.337997 | orchestrator | Saturday 31 May 2025 16:33:23 +0000 (0:00:06.078) 0:01:22.885 **********
2025-05-31 16:35:59.338004 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338012 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:35:59.338055 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338063 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:35:59.338071 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338080 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:35:59.338087 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338095 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:35:59.338103 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338110 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:35:59.338118 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338126 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:35:59.338133 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 16:35:59.338141 | orchestrator |
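
Note that the alertmanager configuration comes from the testbed configuration overlay (/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) rather than from a template shipped with the role, and as with the other server-side files it only lands on testbed-manager. Purely as an illustration of the expected shape (receiver name and webhook URL are hypothetical, not the values used in this run), such a file follows the standard Alertmanager schema:

  # prometheus-alertmanager.yml -- hypothetical sketch, not the overlay used here
  route:
    receiver: default                    # hypothetical receiver name
    group_by: ['alertname', 'instance']
    group_wait: 30s
    repeat_interval: 4h
  receivers:
    - name: default
      webhook_configs:
        - url: "http://localhost:8080/alerts"   # placeholder endpoint
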
2025-05-31 16:35:59.338149 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-31 16:35:59.338157 | orchestrator | Saturday 31 May 2025 16:33:26 +0000 (0:00:03.053) 0:01:25.938 **********
2025-05-31 16:35:59.338164 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 16:35:59.338172 | orchestrator |
2025-05-31 16:35:59.338190 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-31 16:35:59.338203 | orchestrator | Saturday 31 May 2025 16:33:26 +0000 (0:00:00.337) 0:01:26.275 **********
2025-05-31 16:35:59.338216 | orchestrator | skipping: [testbed-manager]
2025-05-31 16:35:59.338227 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:35:59.338240 | orchestrator | skipping:
[testbed-node-1] 2025-05-31 16:35:59.338253 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.338264 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.338277 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.338290 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.338303 | orchestrator | 2025-05-31 16:35:59.338316 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-31 16:35:59.338341 | orchestrator | Saturday 31 May 2025 16:33:27 +0000 (0:00:00.787) 0:01:27.063 ********** 2025-05-31 16:35:59.338355 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.338364 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.338372 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.338379 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.338387 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:35:59.338395 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:35:59.338402 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:35:59.338410 | orchestrator | 2025-05-31 16:35:59.338418 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-31 16:35:59.338426 | orchestrator | Saturday 31 May 2025 16:33:31 +0000 (0:00:03.823) 0:01:30.886 ********** 2025-05-31 16:35:59.338433 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338441 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.338449 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338457 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.338464 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338472 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.338480 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338488 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.338496 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338503 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.338511 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338520 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.338534 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 16:35:59.338547 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.338559 | orchestrator | 2025-05-31 16:35:59.338573 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-31 16:35:59.338587 | orchestrator | Saturday 31 May 2025 16:33:34 +0000 (0:00:02.592) 0:01:33.479 ********** 2025-05-31 16:35:59.338600 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 16:35:59.338614 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.338623 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 16:35:59.338631 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.338638 
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 16:35:59.338646 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.338654 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 16:35:59.338661 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.338676 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 16:35:59.338684 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.338692 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 16:35:59.338700 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.338707 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-31 16:35:59.338715 | orchestrator | 2025-05-31 16:35:59.338723 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-31 16:35:59.338730 | orchestrator | Saturday 31 May 2025 16:33:37 +0000 (0:00:03.210) 0:01:36.690 ********** 2025-05-31 16:35:59.338738 | orchestrator | [WARNING]: Skipped 2025-05-31 16:35:59.338746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-31 16:35:59.338754 | orchestrator | due to this access issue: 2025-05-31 16:35:59.338761 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-31 16:35:59.338769 | orchestrator | not a directory 2025-05-31 16:35:59.338777 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 16:35:59.338785 | orchestrator | 2025-05-31 16:35:59.338792 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-31 16:35:59.338800 | orchestrator | Saturday 31 May 2025 16:33:38 +0000 (0:00:01.612) 0:01:38.302 ********** 2025-05-31 16:35:59.338808 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.338816 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.338876 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.338886 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.338893 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.338901 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.338909 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.338916 | orchestrator | 2025-05-31 16:35:59.338924 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-31 16:35:59.338932 | orchestrator | Saturday 31 May 2025 16:33:39 +0000 (0:00:00.990) 0:01:39.292 ********** 2025-05-31 16:35:59.338939 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.338947 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.338955 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.338962 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.338970 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.338977 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.338985 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.338993 | orchestrator | 2025-05-31 16:35:59.338999 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] 
**************** 2025-05-31 16:35:59.339015 | orchestrator | Saturday 31 May 2025 16:33:40 +0000 (0:00:00.875) 0:01:40.167 ********** 2025-05-31 16:35:59.339022 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339029 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.339036 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339042 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.339049 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339055 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.339062 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339068 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.339075 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339081 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.339092 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339099 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.339105 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-31 16:35:59.339112 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.339118 | orchestrator | 2025-05-31 16:35:59.339125 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-31 16:35:59.339131 | orchestrator | Saturday 31 May 2025 16:33:43 +0000 (0:00:02.732) 0:01:42.900 ********** 2025-05-31 16:35:59.339138 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339144 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:35:59.339151 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339157 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:35:59.339164 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339170 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:35:59.339177 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339183 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:35:59.339190 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339196 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:35:59.339203 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339210 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:35:59.339216 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-31 16:35:59.339223 | orchestrator | skipping: [testbed-manager] 2025-05-31 16:35:59.339229 | orchestrator | 2025-05-31 16:35:59.339236 | orchestrator | TASK [prometheus : Check prometheus 
containers] ******************************** 2025-05-31 16:35:59.339242 | orchestrator | Saturday 31 May 2025 16:33:46 +0000 (0:00:03.015) 0:01:45.916 ********** 2025-05-31 16:35:59.339250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.339258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.339273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.339285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 16:35:59.339292 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.339299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.339306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 16:35:59.339323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.339342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-05-31 16:35:59.339355 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.339368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.339404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.339416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.339469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 16:35:59.339508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339544 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 16:35:59.339553 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.339594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.339645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.339692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339720 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.339744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.339882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.339916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 16:35:59.339930 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 16:35:59.339941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 16:35:59.339948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 
16:35:59.339976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.339983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.339990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.340001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.340008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.340023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 16:35:59.340030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.340037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 16:35:59.340044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 16:35:59.340051 | orchestrator | 2025-05-31 16:35:59.340058 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-31 16:35:59.340064 | orchestrator | Saturday 31 May 2025 16:33:51 +0000 (0:00:05.163) 0:01:51.079 ********** 2025-05-31 16:35:59.340075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-31 16:35:59.340082 | orchestrator | 2025-05-31 16:35:59.340089 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 16:35:59.340096 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:02.369) 0:01:53.448 ********** 2025-05-31 16:35:59.340102 | orchestrator | 2025-05-31 16:35:59.340109 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 16:35:59.340115 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.051) 0:01:53.500 ********** 2025-05-31 16:35:59.340122 | orchestrator | 2025-05-31 16:35:59.340128 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 16:35:59.340135 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.158) 0:01:53.659 ********** 2025-05-31 16:35:59.340141 | orchestrator | 2025-05-31 16:35:59.340148 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 16:35:59.340154 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.047) 0:01:53.707 ********** 2025-05-31 16:35:59.340161 | orchestrator | 2025-05-31 16:35:59.340167 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 16:35:59.340174 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.050) 0:01:53.757 ********** 2025-05-31 16:35:59.340180 | orchestrator | 2025-05-31 16:35:59.340187 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 
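The item payloads printed by the "Check prometheus containers" loop above are Python dict representations of kolla-ansible-style service definitions. For readability, here is one entry reconstructed from the testbed-node output; only the surrounding variable name is an assumption, the field values are taken from the log:

# Illustrative reconstruction of the prometheus-node-exporter loop item shown above.
# The name "prometheus_node_exporter_service" is hypothetical; the values come from the log.
prometheus_node_exporter_service = {
    "container_name": "prometheus_node_exporter",
    "group": "prometheus-node-exporter",  # inventory group that receives this container
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206",
    "pid_mode": "host",  # exporter runs in the host PID namespace
    "volumes": [
        "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "/:/host:ro,rslave",  # read-only view of the host filesystem for node metrics
    ],
    "dimensions": {},  # no extra container resource limits set in this deployment
}

Entries that also carry a "haproxy" sub-dict (prometheus-server on port 9091, prometheus-alertmanager on port 9093) additionally describe the load-balancer frontends for those services; the skipping/changed split per host simply reflects which inventory groups each node belongs to.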
16:35:59.340193 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.049) 0:01:53.807 ********** 2025-05-31 16:35:59.340200 | orchestrator | 2025-05-31 16:35:59.340206 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 16:35:59.340213 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.240) 0:01:54.047 ********** 2025-05-31 16:35:59.340219 | orchestrator | 2025-05-31 16:35:59.340226 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-31 16:35:59.340232 | orchestrator | Saturday 31 May 2025 16:33:54 +0000 (0:00:00.068) 0:01:54.115 ********** 2025-05-31 16:35:59.340239 | orchestrator | changed: [testbed-manager] 2025-05-31 16:35:59.340245 | orchestrator | 2025-05-31 16:35:59.340252 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-31 16:35:59.340258 | orchestrator | Saturday 31 May 2025 16:34:10 +0000 (0:00:15.577) 0:02:09.693 ********** 2025-05-31 16:35:59.340265 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:35:59.340271 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:35:59.340278 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:35:59.340284 | orchestrator | changed: [testbed-manager] 2025-05-31 16:35:59.340291 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:35:59.340297 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:35:59.340304 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:35:59.340310 | orchestrator | 2025-05-31 16:35:59.340317 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-31 16:35:59.340323 | orchestrator | Saturday 31 May 2025 16:34:28 +0000 (0:00:17.949) 0:02:27.642 ********** 2025-05-31 16:35:59.340330 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:35:59.340336 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:35:59.340343 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:35:59.340349 | orchestrator | 2025-05-31 16:35:59.340356 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-31 16:35:59.340372 | orchestrator | Saturday 31 May 2025 16:34:41 +0000 (0:00:12.918) 0:02:40.561 ********** 2025-05-31 16:35:59.340379 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:35:59.340385 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:35:59.340392 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:35:59.340399 | orchestrator | 2025-05-31 16:35:59.340405 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-31 16:35:59.340412 | orchestrator | Saturday 31 May 2025 16:34:55 +0000 (0:00:13.888) 0:02:54.449 ********** 2025-05-31 16:35:59.340418 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:35:59.340430 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:35:59.340436 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:35:59.340443 | orchestrator | changed: [testbed-manager] 2025-05-31 16:35:59.340449 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:35:59.340456 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:35:59.340462 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:35:59.340469 | orchestrator | 2025-05-31 16:35:59.340475 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-31 16:35:59.340482 | orchestrator | Saturday 31 May 2025 16:35:15 +0000 
(0:00:20.423) 0:03:14.872 ********** 2025-05-31 16:35:59.340488 | orchestrator | changed: [testbed-manager] 2025-05-31 16:35:59.340495 | orchestrator | 2025-05-31 16:35:59.340502 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-31 16:35:59.340508 | orchestrator | Saturday 31 May 2025 16:35:23 +0000 (0:00:08.327) 0:03:23.199 ********** 2025-05-31 16:35:59.340515 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:35:59.340521 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:35:59.340528 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:35:59.340534 | orchestrator | 2025-05-31 16:35:59.340541 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-31 16:35:59.340547 | orchestrator | Saturday 31 May 2025 16:35:36 +0000 (0:00:12.535) 0:03:35.735 ********** 2025-05-31 16:35:59.340554 | orchestrator | changed: [testbed-manager] 2025-05-31 16:35:59.340560 | orchestrator | 2025-05-31 16:35:59.340567 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-31 16:35:59.340573 | orchestrator | Saturday 31 May 2025 16:35:44 +0000 (0:00:07.864) 0:03:43.599 ********** 2025-05-31 16:35:59.340580 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:35:59.340586 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:35:59.340593 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:35:59.340599 | orchestrator | 2025-05-31 16:35:59.340606 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:35:59.340613 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-31 16:35:59.340620 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-31 16:35:59.340627 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-31 16:35:59.340633 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-31 16:35:59.340640 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-31 16:35:59.340647 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-31 16:35:59.340653 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-31 16:35:59.340660 | orchestrator | 2025-05-31 16:35:59.340666 | orchestrator | 2025-05-31 16:35:59.340673 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:35:59.340680 | orchestrator | Saturday 31 May 2025 16:35:57 +0000 (0:00:13.580) 0:03:57.179 ********** 2025-05-31 16:35:59.340686 | orchestrator | =============================================================================== 2025-05-31 16:35:59.340693 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.29s 2025-05-31 16:35:59.340699 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.42s 2025-05-31 16:35:59.340706 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.95s 2025-05-31 16:35:59.340717 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.58s 2025-05-31 16:35:59.340723 | 
orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.33s 2025-05-31 16:35:59.340730 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.89s 2025-05-31 16:35:59.340736 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.58s 2025-05-31 16:35:59.340743 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.92s 2025-05-31 16:35:59.340749 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.54s 2025-05-31 16:35:59.340756 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.33s 2025-05-31 16:35:59.340762 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.98s 2025-05-31 16:35:59.340769 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.86s 2025-05-31 16:35:59.340775 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 6.08s 2025-05-31 16:35:59.340788 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.42s 2025-05-31 16:35:59.340795 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.16s 2025-05-31 16:35:59.340802 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.82s 2025-05-31 16:35:59.340808 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.21s 2025-05-31 16:35:59.340815 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.16s 2025-05-31 16:35:59.340836 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.14s 2025-05-31 16:35:59.340846 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.05s 2025-05-31 16:35:59.340853 | orchestrator | 2025-05-31 16:35:59 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:35:59.340860 | orchestrator | 2025-05-31 16:35:59 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:35:59.340866 | orchestrator | 2025-05-31 16:35:59 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:35:59.340873 | orchestrator | 2025-05-31 16:35:59 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:35:59.340880 | orchestrator | 2025-05-31 16:35:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:02.388870 | orchestrator | 2025-05-31 16:36:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:02.390600 | orchestrator | 2025-05-31 16:36:02 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:02.392934 | orchestrator | 2025-05-31 16:36:02 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:02.395110 | orchestrator | 2025-05-31 16:36:02 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:02.397276 | orchestrator | 2025-05-31 16:36:02 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:02.397298 | orchestrator | 2025-05-31 16:36:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:05.445933 | orchestrator | 2025-05-31 16:36:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
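The PLAY RECAP above uses Ansible's fixed "host : ok=N changed=N unreachable=N failed=N skipped=N rescued=N ignored=N" layout, so it is straightforward to post-process when comparing runs. A minimal, purely illustrative parser (not part of the job) for one recap row:

import re

# Parse one PLAY RECAP row, as printed in the log above, into (host, counters).
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>.*)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    match = RECAP_RE.match(line.strip())
    if match is None:
        raise ValueError(f"not a PLAY RECAP row: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in match.group("counters").split())
    }
    return match.group("host"), counters

host, counters = parse_recap_line(
    "testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0"
)
assert host == "testbed-manager" and counters["failed"] == 0  # values copied from the recap above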
16:36:05.446632 | orchestrator | 2025-05-31 16:36:05 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:05.447730 | orchestrator | 2025-05-31 16:36:05 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:05.449535 | orchestrator | 2025-05-31 16:36:05 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:05.451044 | orchestrator | 2025-05-31 16:36:05 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:05.451161 | orchestrator | 2025-05-31 16:36:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:08.518982 | orchestrator | 2025-05-31 16:36:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:08.520174 | orchestrator | 2025-05-31 16:36:08 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:08.521858 | orchestrator | 2025-05-31 16:36:08 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:08.523381 | orchestrator | 2025-05-31 16:36:08 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:08.527924 | orchestrator | 2025-05-31 16:36:08 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:08.527962 | orchestrator | 2025-05-31 16:36:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:11.569533 | orchestrator | 2025-05-31 16:36:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:11.570762 | orchestrator | 2025-05-31 16:36:11 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:11.572008 | orchestrator | 2025-05-31 16:36:11 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:11.573901 | orchestrator | 2025-05-31 16:36:11 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:11.575050 | orchestrator | 2025-05-31 16:36:11 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:11.575086 | orchestrator | 2025-05-31 16:36:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:14.635585 | orchestrator | 2025-05-31 16:36:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:14.635889 | orchestrator | 2025-05-31 16:36:14 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:14.637132 | orchestrator | 2025-05-31 16:36:14 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:14.637166 | orchestrator | 2025-05-31 16:36:14 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:14.637598 | orchestrator | 2025-05-31 16:36:14 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:14.637625 | orchestrator | 2025-05-31 16:36:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:17.675314 | orchestrator | 2025-05-31 16:36:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:17.675666 | orchestrator | 2025-05-31 16:36:17 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:17.676236 | orchestrator | 2025-05-31 16:36:17 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:17.676770 | orchestrator | 2025-05-31 16:36:17 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in 
state STARTED 2025-05-31 16:36:17.677196 | orchestrator | 2025-05-31 16:36:17 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:17.677222 | orchestrator | 2025-05-31 16:36:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:20.701217 | orchestrator | 2025-05-31 16:36:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:20.701304 | orchestrator | 2025-05-31 16:36:20 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:20.701328 | orchestrator | 2025-05-31 16:36:20 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:20.702055 | orchestrator | 2025-05-31 16:36:20 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:20.702668 | orchestrator | 2025-05-31 16:36:20 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:20.702678 | orchestrator | 2025-05-31 16:36:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:23.746138 | orchestrator | 2025-05-31 16:36:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:23.748272 | orchestrator | 2025-05-31 16:36:23 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:23.748317 | orchestrator | 2025-05-31 16:36:23 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:23.749669 | orchestrator | 2025-05-31 16:36:23 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:23.751003 | orchestrator | 2025-05-31 16:36:23 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:23.751127 | orchestrator | 2025-05-31 16:36:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:26.792425 | orchestrator | 2025-05-31 16:36:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:26.793634 | orchestrator | 2025-05-31 16:36:26 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:26.795299 | orchestrator | 2025-05-31 16:36:26 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:26.796958 | orchestrator | 2025-05-31 16:36:26 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:26.798325 | orchestrator | 2025-05-31 16:36:26 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:26.798358 | orchestrator | 2025-05-31 16:36:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:29.846513 | orchestrator | 2025-05-31 16:36:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:29.847859 | orchestrator | 2025-05-31 16:36:29 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:29.849383 | orchestrator | 2025-05-31 16:36:29 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:29.851053 | orchestrator | 2025-05-31 16:36:29 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:29.852626 | orchestrator | 2025-05-31 16:36:29 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:29.852828 | orchestrator | 2025-05-31 16:36:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:32.909575 | orchestrator | 2025-05-31 16:36:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in 
state STARTED 2025-05-31 16:36:32.912166 | orchestrator | 2025-05-31 16:36:32 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:32.912620 | orchestrator | 2025-05-31 16:36:32 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:32.914500 | orchestrator | 2025-05-31 16:36:32 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:32.915616 | orchestrator | 2025-05-31 16:36:32 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:32.915883 | orchestrator | 2025-05-31 16:36:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:35.958222 | orchestrator | 2025-05-31 16:36:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:35.959150 | orchestrator | 2025-05-31 16:36:35 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:35.960467 | orchestrator | 2025-05-31 16:36:35 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:35.961995 | orchestrator | 2025-05-31 16:36:35 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:35.963136 | orchestrator | 2025-05-31 16:36:35 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:35.963165 | orchestrator | 2025-05-31 16:36:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:39.011226 | orchestrator | 2025-05-31 16:36:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:39.012204 | orchestrator | 2025-05-31 16:36:39 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:39.014081 | orchestrator | 2025-05-31 16:36:39 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:39.016026 | orchestrator | 2025-05-31 16:36:39 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:39.018290 | orchestrator | 2025-05-31 16:36:39 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:39.018328 | orchestrator | 2025-05-31 16:36:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:42.062078 | orchestrator | 2025-05-31 16:36:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:42.063424 | orchestrator | 2025-05-31 16:36:42 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:42.065033 | orchestrator | 2025-05-31 16:36:42 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:42.067212 | orchestrator | 2025-05-31 16:36:42 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:42.069084 | orchestrator | 2025-05-31 16:36:42 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state STARTED 2025-05-31 16:36:42.069106 | orchestrator | 2025-05-31 16:36:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:45.121735 | orchestrator | 2025-05-31 16:36:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:45.122701 | orchestrator | 2025-05-31 16:36:45 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:45.123866 | orchestrator | 2025-05-31 16:36:45 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:45.125415 | orchestrator | 2025-05-31 16:36:45 | INFO  | Task 
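(Editorial note: the INFO lines above and below come from the deployment CLI repeatedly polling the state of queued Ansible tasks until they finish. The Python sketch below only illustrates that pattern; wait_for_tasks, get_task_state and the one-second interval are names assumed for this example and are not taken from the actual OSISM code.)

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll every task until all of them reach a terminal state, mirroring the
    # "is in state STARTED" / "Wait 1 second(s) until the next check" lines above.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # assumed lookup, e.g. an API call
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)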
2025-05-31 16:36:48.169678 | orchestrator | 2025-05-31 16:36:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:36:48.170896 | orchestrator | 2025-05-31 16:36:48 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED
2025-05-31 16:36:48.174011 | orchestrator | 2025-05-31 16:36:48 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED
2025-05-31 16:36:48.175203 | orchestrator | 2025-05-31 16:36:48 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED
2025-05-31 16:36:48.177138 | orchestrator |
2025-05-31 16:36:48.177166 | orchestrator | 2025-05-31 16:36:48 | INFO  | Task 18df0807-c69f-44ba-88d3-c63b5f570227 is in state SUCCESS
2025-05-31 16:36:48.178993 | orchestrator |
2025-05-31 16:36:48.179032 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 16:36:48.179045 | orchestrator |
2025-05-31 16:36:48.179071 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 16:36:48.179083 | orchestrator | Saturday 31 May 2025 16:33:30 +0000 (0:00:00.360) 0:00:00.360 **********
2025-05-31 16:36:48.179095 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:36:48.179108 | orchestrator | ok: [testbed-node-1]
2025-05-31 16:36:48.179118 | orchestrator | ok: [testbed-node-2]
2025-05-31 16:36:48.179129 | orchestrator |
2025-05-31 16:36:48.179140 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 16:36:48.179150 | orchestrator | Saturday 31 May 2025 16:33:30 +0000 (0:00:00.316) 0:00:00.677 **********
2025-05-31 16:36:48.179161 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-31 16:36:48.179172 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-31 16:36:48.179183 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-31 16:36:48.179193 | orchestrator |
2025-05-31 16:36:48.179203 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-31 16:36:48.179214 | orchestrator |
2025-05-31 16:36:48.179224 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-31 16:36:48.179235 | orchestrator | Saturday 31 May 2025 16:33:31 +0000 (0:00:00.292) 0:00:00.969 **********
2025-05-31 16:36:48.179245 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 16:36:48.179256 | orchestrator |
2025-05-31 16:36:48.179267 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-31 16:36:48.179298 | orchestrator | Saturday 31 May 2025 16:33:32 +0000 (0:00:00.795) 0:00:01.764 **********
2025-05-31 16:36:48.179309 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-05-31 16:36:48.179320 | orchestrator |
2025-05-31 16:36:48.179331 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-05-31 16:36:48.179342 | orchestrator | Saturday 31 May 2025 16:33:35 +0000 (0:00:03.638) 0:00:05.403 **********
2025-05-31 16:36:48.179353 | orchestrator
| changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-31 16:36:48.179364 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-31 16:36:48.179375 | orchestrator | 2025-05-31 16:36:48.179386 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-31 16:36:48.179397 | orchestrator | Saturday 31 May 2025 16:33:42 +0000 (0:00:07.087) 0:00:12.490 ********** 2025-05-31 16:36:48.179435 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:36:48.179448 | orchestrator | 2025-05-31 16:36:48.179458 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-31 16:36:48.179469 | orchestrator | Saturday 31 May 2025 16:33:46 +0000 (0:00:03.665) 0:00:16.156 ********** 2025-05-31 16:36:48.179480 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:36:48.179491 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-31 16:36:48.179502 | orchestrator | 2025-05-31 16:36:48.179513 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-31 16:36:48.179524 | orchestrator | Saturday 31 May 2025 16:33:50 +0000 (0:00:04.140) 0:00:20.296 ********** 2025-05-31 16:36:48.179534 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:36:48.179545 | orchestrator | 2025-05-31 16:36:48.179556 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-31 16:36:48.179569 | orchestrator | Saturday 31 May 2025 16:33:53 +0000 (0:00:03.386) 0:00:23.682 ********** 2025-05-31 16:36:48.179581 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-31 16:36:48.179592 | orchestrator | 2025-05-31 16:36:48.179605 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-31 16:36:48.179630 | orchestrator | Saturday 31 May 2025 16:33:58 +0000 (0:00:04.433) 0:00:28.116 ********** 2025-05-31 16:36:48.179671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.179691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.179706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.179742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.179758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.179828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.179844 | orchestrator | 2025-05-31 16:36:48.179856 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-31 16:36:48.179868 | orchestrator | Saturday 31 May 2025 16:34:02 +0000 (0:00:03.770) 0:00:31.887 ********** 2025-05-31 16:36:48.179880 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:36:48.179892 | orchestrator | 2025-05-31 16:36:48.179904 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-31 16:36:48.179915 | orchestrator | Saturday 31 May 2025 16:34:02 +0000 (0:00:00.450) 0:00:32.337 ********** 2025-05-31 16:36:48.179926 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:36:48.179936 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:36:48.179947 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:36:48.179958 | orchestrator | 2025-05-31 16:36:48.179968 | 
orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-31 16:36:48.179979 | orchestrator | Saturday 31 May 2025 16:34:08 +0000 (0:00:06.140) 0:00:38.477 ********** 2025-05-31 16:36:48.179989 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:48.180005 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:48.180015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:48.180026 | orchestrator | 2025-05-31 16:36:48.180036 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-31 16:36:48.180047 | orchestrator | Saturday 31 May 2025 16:34:10 +0000 (0:00:01.759) 0:00:40.237 ********** 2025-05-31 16:36:48.180057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:48.180068 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:48.180078 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:48.180089 | orchestrator | 2025-05-31 16:36:48.180099 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-31 16:36:48.180110 | orchestrator | Saturday 31 May 2025 16:34:11 +0000 (0:00:01.382) 0:00:41.620 ********** 2025-05-31 16:36:48.180120 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:36:48.180130 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:36:48.180141 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:36:48.180151 | orchestrator | 2025-05-31 16:36:48.180161 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-31 16:36:48.180172 | orchestrator | Saturday 31 May 2025 16:34:13 +0000 (0:00:01.578) 0:00:43.198 ********** 2025-05-31 16:36:48.180182 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.180192 | orchestrator | 2025-05-31 16:36:48.180203 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-31 16:36:48.180213 | orchestrator | Saturday 31 May 2025 16:34:13 +0000 (0:00:00.201) 0:00:43.399 ********** 2025-05-31 16:36:48.180224 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.180234 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:48.180245 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:48.180255 | orchestrator | 2025-05-31 16:36:48.180266 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-31 16:36:48.180276 | orchestrator | Saturday 31 May 2025 16:34:14 +0000 (0:00:00.751) 0:00:44.150 ********** 2025-05-31 16:36:48.180287 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:36:48.180297 | orchestrator | 2025-05-31 16:36:48.180308 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-31 16:36:48.180318 | orchestrator | Saturday 31 May 2025 16:34:15 +0000 (0:00:01.489) 0:00:45.640 ********** 2025-05-31 16:36:48.180343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.180364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.180390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.180410 | orchestrator | 2025-05-31 16:36:48.180421 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-31 16:36:48.180432 | orchestrator | Saturday 31 May 2025 16:34:22 +0000 (0:00:07.015) 0:00:52.656 ********** 2025-05-31 16:36:48.180444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:36:48.180456 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.180480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:36:48.180493 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:48.180505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:36:48.180529 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:48.180540 | orchestrator | 2025-05-31 16:36:48.180550 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-31 16:36:48.180561 | orchestrator | Saturday 31 May 2025 16:34:26 +0000 (0:00:03.736) 0:00:56.392 ********** 2025-05-31 16:36:48.180638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:36:48.180654 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.180666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:36:48.180685 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:48.180696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 16:36:48.180708 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:48.180720 | orchestrator | 2025-05-31 16:36:48.180731 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-31 16:36:48.180741 | orchestrator | Saturday 31 May 2025 16:34:31 +0000 (0:00:05.041) 0:01:01.434 ********** 2025-05-31 16:36:48.180752 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.180763 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:48.180775 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:48.180830 | orchestrator | 2025-05-31 16:36:48.181100 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-31 16:36:48.181118 | orchestrator | Saturday 31 May 2025 16:34:35 +0000 (0:00:03.822) 0:01:05.257 ********** 2025-05-31 16:36:48.181137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.181160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.181188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.181208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
2025-05-31 16:36:48.181233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.181253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-31 16:36:48.181265 | orchestrator |
2025-05-31 16:36:48.181275 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-31 16:36:48.181286 | orchestrator | Saturday 31 May 2025 16:34:39 +0000 (0:00:04.403) 0:01:09.660 **********
2025-05-31 16:36:48.181297 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:36:48.181308 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:36:48.181318 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:36:48.181329 | orchestrator |
2025-05-31 16:36:48.181339 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-31 16:36:48.181350 | orchestrator | Saturday 31 May 2025 16:34:52 +0000 (0:00:12.767) 0:01:22.427 **********
2025-05-31 16:36:48.181360 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:36:48.181371 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:48.181381 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:36:48.181392 | orchestrator |
2025-05-31 16:36:48.181403 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-31 16:36:48.181413 | orchestrator | Saturday 31 May 2025 16:35:08 +0000 (0:00:16.150) 0:01:38.578 **********
2025-05-31 16:36:48.181424 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:36:48.181434 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:48.181445 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:36:48.181455 | orchestrator |
2025-05-31 16:36:48.181466 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-31 16:36:48.181482 | orchestrator | Saturday 31 May 2025 16:35:17 +0000 (0:00:08.845) 0:01:47.424 **********
2025-05-31 16:36:48.181493 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:48.181503 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:36:48.181514 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:36:48.181525 | orchestrator |
2025-05-31 16:36:48.181535 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-31 16:36:48.181546 | orchestrator | Saturday 31 May 2025 16:35:22 +0000 (0:00:05.068) 0:01:52.492 **********
2025-05-31 16:36:48.181556 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:36:48.181572 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:36:48.181584 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:48.181594 | orchestrator |
2025-05-31 16:36:48.181605 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-31 16:36:48.181620 | orchestrator | Saturday 31 May 2025 16:35:30 +0000 (0:00:07.525) 0:02:00.018 **********
2025-05-31 16:36:48.181631 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:48.181642 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:36:48.181652 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:36:48.181663 | orchestrator |
2025-05-31 16:36:48.181673 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-31 16:36:48.181684 | orchestrator | Saturday 31 May 2025 16:35:30 +0000 (0:00:00.251) 0:02:00.269 **********
2025-05-31 16:36:48.181695 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-31 16:36:48.181705 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:48.181716 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-31 16:36:48.181727 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.181737 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-31 16:36:48.181748 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:48.181758 | orchestrator | 2025-05-31 16:36:48.181769 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-31 16:36:48.181779 | orchestrator | Saturday 31 May 2025 16:35:33 +0000 (0:00:02.876) 0:02:03.146 ********** 2025-05-31 16:36:48.181844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.181879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.181893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.181912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.181935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 16:36:48.181948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 16:36:48.182076 | orchestrator | 2025-05-31 16:36:48.182091 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-31 16:36:48.182102 | orchestrator | Saturday 31 May 2025 16:35:37 +0000 (0:00:04.045) 0:02:07.191 ********** 2025-05-31 16:36:48.182113 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:48.182124 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:48.182135 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:48.182145 | orchestrator | 2025-05-31 16:36:48.182162 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-31 16:36:48.182173 | orchestrator | Saturday 31 May 2025 16:35:37 +0000 (0:00:00.394) 0:02:07.586 ********** 2025-05-31 16:36:48.182184 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:36:48.182195 | orchestrator | 2025-05-31 16:36:48.182211 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-31 16:36:48.182221 | orchestrator | Saturday 31 May 2025 16:35:40 +0000 (0:00:02.458) 0:02:10.045 ********** 2025-05-31 16:36:48.182230 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:36:48.182240 | orchestrator | 2025-05-31 16:36:48.182250 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-31 16:36:48.182259 | orchestrator | Saturday 31 May 2025 16:35:43 +0000 (0:00:02.675) 0:02:12.720 ********** 2025-05-31 16:36:48.182269 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:36:48.182278 | orchestrator | 2025-05-31 16:36:48.182288 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-31 16:36:48.182297 | orchestrator | Saturday 31 May 2025 16:35:45 +0000 (0:00:02.562) 0:02:15.283 ********** 2025-05-31 16:36:48.182307 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:36:48.182316 | orchestrator | 2025-05-31 16:36:48.182325 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-31 16:36:48.182335 | orchestrator | Saturday 31 May 2025 16:36:11 +0000 (0:00:25.672) 0:02:40.955 ********** 2025-05-31 16:36:48.182344 | orchestrator | changed: [testbed-node-0] 2025-05-31 
16:36:48.182353 | orchestrator | 2025-05-31 16:36:48.182363 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-31 16:36:48.182372 | orchestrator | Saturday 31 May 2025 16:36:13 +0000 (0:00:02.287) 0:02:43.243 ********** 2025-05-31 16:36:48.182382 | orchestrator | 2025-05-31 16:36:48.182391 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-31 16:36:48.182400 | orchestrator | Saturday 31 May 2025 16:36:13 +0000 (0:00:00.070) 0:02:43.313 ********** 2025-05-31 16:36:48.182410 | orchestrator | 2025-05-31 16:36:48.182419 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-31 16:36:48.182428 | orchestrator | Saturday 31 May 2025 16:36:13 +0000 (0:00:00.061) 0:02:43.375 ********** 2025-05-31 16:36:48.182438 | orchestrator | 2025-05-31 16:36:48.182447 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-31 16:36:48.182456 | orchestrator | Saturday 31 May 2025 16:36:13 +0000 (0:00:00.215) 0:02:43.590 ********** 2025-05-31 16:36:48.182473 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:36:48.182483 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:36:48.182492 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:36:48.182501 | orchestrator | 2025-05-31 16:36:48.182511 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:36:48.182522 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-31 16:36:48.182533 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-31 16:36:48.182542 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-31 16:36:48.182552 | orchestrator | 2025-05-31 16:36:48.182562 | orchestrator | 2025-05-31 16:36:48.182571 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:36:48.182581 | orchestrator | Saturday 31 May 2025 16:36:47 +0000 (0:00:33.800) 0:03:17.391 ********** 2025-05-31 16:36:48.182590 | orchestrator | =============================================================================== 2025-05-31 16:36:48.182599 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.80s 2025-05-31 16:36:48.182609 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.67s 2025-05-31 16:36:48.182618 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 16.15s 2025-05-31 16:36:48.182627 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 12.77s 2025-05-31 16:36:48.182637 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 8.85s 2025-05-31 16:36:48.182646 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.53s 2025-05-31 16:36:48.182655 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.09s 2025-05-31 16:36:48.182665 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.02s 2025-05-31 16:36:48.182674 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.14s 2025-05-31 16:36:48.182683 | orchestrator | glance : Copying over 
glance-image-import.conf -------------------------- 5.07s 2025-05-31 16:36:48.182693 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.04s 2025-05-31 16:36:48.182702 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.43s 2025-05-31 16:36:48.182711 | orchestrator | glance : Copying over config.json files for services -------------------- 4.40s 2025-05-31 16:36:48.182721 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.14s 2025-05-31 16:36:48.182730 | orchestrator | glance : Check glance containers ---------------------------------------- 4.05s 2025-05-31 16:36:48.182739 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.82s 2025-05-31 16:36:48.182749 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.77s 2025-05-31 16:36:48.182758 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.74s 2025-05-31 16:36:48.182768 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.67s 2025-05-31 16:36:48.182782 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.64s 2025-05-31 16:36:48.182812 | orchestrator | 2025-05-31 16:36:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:51.234910 | orchestrator | 2025-05-31 16:36:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:51.236118 | orchestrator | 2025-05-31 16:36:51 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:51.237239 | orchestrator | 2025-05-31 16:36:51 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state STARTED 2025-05-31 16:36:51.239513 | orchestrator | 2025-05-31 16:36:51 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:51.240648 | orchestrator | 2025-05-31 16:36:51 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:36:51.241148 | orchestrator | 2025-05-31 16:36:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:54.298096 | orchestrator | 2025-05-31 16:36:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:54.298217 | orchestrator | 2025-05-31 16:36:54 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:54.300507 | orchestrator | 2025-05-31 16:36:54 | INFO  | Task 6f5c869a-c639-4f65-89bd-9c010dc5039e is in state SUCCESS 2025-05-31 16:36:54.300547 | orchestrator | 2025-05-31 16:36:54.303349 | orchestrator | 2025-05-31 16:36:54.303416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:36:54.303441 | orchestrator | 2025-05-31 16:36:54.303454 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:36:54.303465 | orchestrator | Saturday 31 May 2025 16:33:40 +0000 (0:00:00.290) 0:00:00.290 ********** 2025-05-31 16:36:54.303476 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:36:54.303499 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:36:54.303510 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:36:54.303521 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:36:54.303564 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:36:54.303575 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:36:54.303652 | orchestrator | 
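The grouping tasks logged here build dynamic inventory groups (for example enable_cinder_True, visible in the loop items that follow) so that later plays such as "Apply role cinder" can be scoped to hosts that have a given service enabled. A minimal sketch of this pattern using Ansible's group_by module; the variable name enable_cinder and the task wording are illustrative assumptions, not the exact kolla-ansible source:

- name: Group hosts based on enabled services
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts by whether cinder is enabled
      # Produces a dynamic group such as "enable_cinder_True" that
      # subsequent plays can target; enable_cinder is an assumed variable.
      ansible.builtin.group_by:
        key: "enable_cinder_{{ enable_cinder | default(false) | bool }}"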
2025-05-31 16:36:54.303666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:36:54.303677 | orchestrator | Saturday 31 May 2025 16:33:41 +0000 (0:00:00.779) 0:00:01.070 ********** 2025-05-31 16:36:54.303688 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-31 16:36:54.303699 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-31 16:36:54.303710 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-31 16:36:54.303852 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-31 16:36:54.303870 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-31 16:36:54.303881 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-31 16:36:54.303892 | orchestrator | 2025-05-31 16:36:54.303903 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-31 16:36:54.303914 | orchestrator | 2025-05-31 16:36:54.303925 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 16:36:54.303938 | orchestrator | Saturday 31 May 2025 16:33:43 +0000 (0:00:01.676) 0:00:02.746 ********** 2025-05-31 16:36:54.303951 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:36:54.303965 | orchestrator | 2025-05-31 16:36:54.303978 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-31 16:36:54.303990 | orchestrator | Saturday 31 May 2025 16:33:44 +0000 (0:00:01.410) 0:00:04.156 ********** 2025-05-31 16:36:54.304003 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-31 16:36:54.304014 | orchestrator | 2025-05-31 16:36:54.304025 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-31 16:36:54.304036 | orchestrator | Saturday 31 May 2025 16:33:48 +0000 (0:00:03.683) 0:00:07.840 ********** 2025-05-31 16:36:54.304047 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-31 16:36:54.304058 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-31 16:36:54.304069 | orchestrator | 2025-05-31 16:36:54.304080 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-31 16:36:54.304118 | orchestrator | Saturday 31 May 2025 16:33:55 +0000 (0:00:07.306) 0:00:15.147 ********** 2025-05-31 16:36:54.304130 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:36:54.304141 | orchestrator | 2025-05-31 16:36:54.304151 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-31 16:36:54.304162 | orchestrator | Saturday 31 May 2025 16:33:59 +0000 (0:00:03.669) 0:00:18.816 ********** 2025-05-31 16:36:54.304194 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:36:54.304206 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-31 16:36:54.304217 | orchestrator | 2025-05-31 16:36:54.304227 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-31 16:36:54.304238 | orchestrator | Saturday 31 May 2025 16:34:03 +0000 
(0:00:04.152) 0:00:22.968 ********** 2025-05-31 16:36:54.304249 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:36:54.304260 | orchestrator | 2025-05-31 16:36:54.304270 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-31 16:36:54.304281 | orchestrator | Saturday 31 May 2025 16:34:06 +0000 (0:00:03.429) 0:00:26.398 ********** 2025-05-31 16:36:54.304318 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-31 16:36:54.304329 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-31 16:36:54.304339 | orchestrator | 2025-05-31 16:36:54.304364 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-31 16:36:54.304375 | orchestrator | Saturday 31 May 2025 16:34:16 +0000 (0:00:09.939) 0:00:36.338 ********** 2025-05-31 16:36:54.304408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.304424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.304436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.304469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.304499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.304531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.304550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304643 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304722 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.304851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.304900 | orchestrator | 2025-05-31 16:36:54.304911 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 16:36:54.304922 | orchestrator | Saturday 31 May 2025 16:34:20 +0000 (0:00:03.387) 0:00:39.726 ********** 2025-05-31 16:36:54.304933 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.304944 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:54.304955 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.304965 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:36:54.304976 | orchestrator | 2025-05-31 16:36:54.304987 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-31 16:36:54.304997 | orchestrator | Saturday 31 May 2025 16:34:21 +0000 (0:00:01.008) 0:00:40.734 ********** 2025-05-31 16:36:54.305008 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-31 16:36:54.305019 
| orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-31 16:36:54.305029 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-31 16:36:54.305040 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-31 16:36:54.305051 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-31 16:36:54.305061 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-31 16:36:54.305072 | orchestrator | 2025-05-31 16:36:54.305082 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-31 16:36:54.305093 | orchestrator | Saturday 31 May 2025 16:34:23 +0000 (0:00:02.427) 0:00:43.161 ********** 2025-05-31 16:36:54.305110 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 16:36:54.305122 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 16:36:54.305141 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 16:36:54.305161 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 16:36:54.305172 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 16:36:54.305188 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 16:36:54.305201 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 16:36:54.305219 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 16:36:54.305242 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 16:36:54.305254 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 16:36:54.305271 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 16:36:54.305719 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 16:36:54.305745 | orchestrator | 2025-05-31 16:36:54.305825 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-31 16:36:54.305866 | orchestrator | Saturday 31 May 2025 16:34:27 +0000 (0:00:03.831) 0:00:46.993 ********** 2025-05-31 16:36:54.305885 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:54.305899 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:54.305910 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 16:36:54.305921 | orchestrator | 2025-05-31 16:36:54.305931 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-31 16:36:54.305942 | orchestrator | Saturday 31 May 2025 16:34:29 +0000 (0:00:02.170) 0:00:49.164 ********** 2025-05-31 16:36:54.305953 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-31 16:36:54.305964 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-31 16:36:54.305974 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-31 16:36:54.305985 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 16:36:54.305996 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 16:36:54.306007 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 16:36:54.306064 | orchestrator | 2025-05-31 16:36:54.306078 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-31 16:36:54.306089 | orchestrator | Saturday 31 May 2025 16:34:33 +0000 (0:00:03.346) 0:00:52.510 ********** 2025-05-31 16:36:54.306100 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-31 16:36:54.306111 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-31 16:36:54.306122 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-31 16:36:54.306133 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-31 16:36:54.306143 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-31 16:36:54.306154 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-31 16:36:54.306165 | orchestrator | 2025-05-31 16:36:54.306175 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-31 16:36:54.306186 | orchestrator | Saturday 31 May 2025 16:34:34 +0000 (0:00:01.197) 0:00:53.707 ********** 2025-05-31 16:36:54.306197 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.306207 | orchestrator | 2025-05-31 16:36:54.306218 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-31 16:36:54.306229 | orchestrator | Saturday 31 May 
2025 16:34:34 +0000 (0:00:00.149) 0:00:53.857 ********** 2025-05-31 16:36:54.306239 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.306250 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:54.306290 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.306302 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:36:54.306313 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:36:54.306324 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:36:54.306335 | orchestrator | 2025-05-31 16:36:54.306346 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 16:36:54.306357 | orchestrator | Saturday 31 May 2025 16:34:35 +0000 (0:00:00.711) 0:00:54.569 ********** 2025-05-31 16:36:54.306370 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:36:54.306382 | orchestrator | 2025-05-31 16:36:54.306394 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-31 16:36:54.306405 | orchestrator | Saturday 31 May 2025 16:34:36 +0000 (0:00:01.088) 0:00:55.657 ********** 2025-05-31 16:36:54.306425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.306458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.306474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.306495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-05-31 16:36:54.306606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.306677 | orchestrator | 2025-05-31 16:36:54.306688 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-31 16:36:54.306699 | orchestrator | Saturday 31 May 2025 16:34:39 +0000 (0:00:03.318) 0:00:58.976 ********** 2025-05-31 16:36:54.306717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.306729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.306752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306771 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.306782 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:54.306856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306890 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:36:54.306901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.306913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306924 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.306943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.306971 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:36:54.306988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307011 | orchestrator | skipping: [testbed-node-5] 
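The service-cert-copy tasks in this run (the extra CA certificates above, and the backend internal TLS certificate here and key below) iterate over the same per-service dictionaries shown in the items: the CA copy reported changed on every host, while the backend TLS copies are skipped on all nodes, consistent with the 'tls_backend': 'no' setting visible in the cinder_api haproxy entries. A minimal, hypothetical Ansible sketch of such a conditional per-service copy follows; the variable names (cinder_services, kolla_enable_tls_backend, kolla_tls_dir) are illustrative assumptions, not the actual kolla-ansible role code:

    # Hypothetical sketch of a per-service conditional TLS copy task.
    # All variable names here are assumptions for illustration only.
    - name: "cinder | Copying over backend internal TLS certificate"
      become: true
      ansible.builtin.copy:
        src: "{{ kolla_tls_dir }}/{{ inventory_hostname }}/{{ item.key }}-cert.pem"
        dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
        mode: "0600"
      with_dict: "{{ cinder_services }}"
      when:
        - item.value.enabled | bool
        - kolla_enable_tls_backend | bool   # backend TLS is disabled in this testbed, so the task skips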
2025-05-31 16:36:54.307022 | orchestrator | 2025-05-31 16:36:54.307033 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-31 16:36:54.307044 | orchestrator | Saturday 31 May 2025 16:34:40 +0000 (0:00:01.454) 0:01:00.431 ********** 2025-05-31 16:36:54.307055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307084 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.307101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307132 | orchestrator | skipping: 
[testbed-node-1] 2025-05-31 16:36:54.307143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307177 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.307189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307213 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:36:54.307230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307251 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:36:54.307261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307286 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:36:54.307296 | orchestrator | 2025-05-31 16:36:54.307306 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-31 16:36:54.307315 | orchestrator | Saturday 31 May 2025 16:34:44 +0000 
(0:00:03.469) 0:01:03.900 ********** 2025-05-31 16:36:54.307329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.307358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307384 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.307415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.307574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.307614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-05-31 16:36:54.307629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307655 | orchestrator | 2025-05-31 16:36:54.307666 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-31 16:36:54.307684 | orchestrator | Saturday 31 May 2025 16:34:48 +0000 (0:00:04.071) 0:01:07.972 ********** 2025-05-31 16:36:54.307701 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-31 16:36:54.307718 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:36:54.307735 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-31 16:36:54.307749 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:36:54.307764 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-31 16:36:54.307780 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:36:54.307822 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-31 16:36:54.307839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-31 16:36:54.307855 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-31 16:36:54.307872 | orchestrator | 2025-05-31 16:36:54.307883 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-31 16:36:54.307892 | orchestrator | Saturday 31 May 2025 16:34:51 +0000 (0:00:03.146) 0:01:11.118 ********** 2025-05-31 16:36:54.307903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.307975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.307992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.308008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.308044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.308067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.308103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.308773 | orchestrator | 2025-05-31 16:36:54.308858 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-31 16:36:54.308872 | orchestrator | Saturday 31 May 2025 16:35:06 +0000 (0:00:14.547) 0:01:25.665 ********** 2025-05-31 16:36:54.308882 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:54.308892 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.308902 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.308925 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:36:54.308935 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:36:54.308943 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:36:54.308950 | orchestrator | 2025-05-31 16:36:54.308958 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-31 16:36:54.308966 | orchestrator | Saturday 31 May 2025 16:35:09 +0000 (0:00:03.606) 0:01:29.272 ********** 2025-05-31 16:36:54.308974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.308983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.308991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309098 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:54.309107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309163 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.309171 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.309179 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:36:54.309187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309232 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:36:54.309245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2025-05-31 16:36:54.309273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309286 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:36:54.309295 | orchestrator | 2025-05-31 16:36:54.309304 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-31 16:36:54.309313 | orchestrator | Saturday 31 May 2025 16:35:11 +0000 (0:00:02.169) 0:01:31.442 ********** 2025-05-31 16:36:54.309325 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:36:54.309334 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:36:54.309343 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:36:54.309352 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:36:54.309361 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:36:54.309370 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:36:54.309403 | orchestrator | 2025-05-31 16:36:54.309424 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-31 16:36:54.309433 | orchestrator | Saturday 31 May 2025 16:35:13 +0000 (0:00:01.154) 0:01:32.597 ********** 2025-05-31 16:36:54.309447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 16:36:54.309505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.309539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.309557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 16:36:54.309571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 
16:36:54.309627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 16:36:54.309712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 16:36:54.309725 | orchestrator | 2025-05-31 16:36:54.309733 | orchestrator | TASK [cinder : 
include_tasks] **************************************************
2025-05-31 16:36:54.309741 | orchestrator | Saturday 31 May 2025 16:35:16 +0000 (0:00:00.575) 0:01:36.310 **********
2025-05-31 16:36:54.309749 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:54.309757 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:36:54.309765 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:36:54.309772 | orchestrator | skipping: [testbed-node-3]
2025-05-31 16:36:54.309780 | orchestrator | skipping: [testbed-node-4]
2025-05-31 16:36:54.309811 | orchestrator | skipping: [testbed-node-5]
2025-05-31 16:36:54.309820 | orchestrator |
2025-05-31 16:36:54.309828 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-05-31 16:36:54.309835 | orchestrator | Saturday 31 May 2025 16:35:17 +0000 (0:00:00.575) 0:01:36.886 **********
2025-05-31 16:36:54.309843 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:36:54.309851 | orchestrator |
2025-05-31 16:36:54.309859 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-05-31 16:36:54.309867 | orchestrator | Saturday 31 May 2025 16:35:19 +0000 (0:00:02.528) 0:01:39.414 **********
2025-05-31 16:36:54.309874 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:36:54.309882 | orchestrator |
2025-05-31 16:36:54.309890 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-05-31 16:36:54.309898 | orchestrator | Saturday 31 May 2025 16:35:22 +0000 (0:00:02.469) 0:01:41.883 **********
2025-05-31 16:36:54.309905 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:36:54.309913 | orchestrator |
2025-05-31 16:36:54.309921 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-31 16:36:54.309929 | orchestrator | Saturday 31 May 2025 16:35:42 +0000 (0:00:20.476) 0:02:02.359 **********
2025-05-31 16:36:54.309936 | orchestrator |
2025-05-31 16:36:54.309944 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-31 16:36:54.309952 | orchestrator | Saturday 31 May 2025 16:35:42 +0000 (0:00:00.054) 0:02:02.414 **********
2025-05-31 16:36:54.309960 | orchestrator |
2025-05-31 16:36:54.309968 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-31 16:36:54.309975 | orchestrator | Saturday 31 May 2025 16:35:43 +0000 (0:00:00.224) 0:02:02.639 **********
2025-05-31 16:36:54.309983 | orchestrator |
2025-05-31 16:36:54.309991 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-31 16:36:54.310002 | orchestrator | Saturday 31 May 2025 16:35:43 +0000 (0:00:00.053) 0:02:02.693 **********
2025-05-31 16:36:54.310011 | orchestrator |
2025-05-31 16:36:54.310045 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-31 16:36:54.310053 | orchestrator | Saturday 31 May 2025 16:35:43 +0000 (0:00:00.054) 0:02:02.747 **********
2025-05-31 16:36:54.310060 | orchestrator |
2025-05-31 16:36:54.310068 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-31 16:36:54.310076 | orchestrator | Saturday 31 May 2025 16:35:43 +0000 (0:00:00.058) 0:02:02.805 **********
2025-05-31 16:36:54.310084 | orchestrator |
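In the container tasks above, the same service item reports `changed` on some hosts and `skipping` on others because each service definition is only acted on by hosts in that service's group: from this output, testbed-node-0 to testbed-node-2 appear to carry cinder-api and cinder-scheduler, while testbed-node-3 to testbed-node-5 appear to carry cinder-volume and cinder-backup. The following is a minimal Python sketch of that per-host filter; the group membership is inferred from the log above, not read from the actual kolla-ansible inventory.

```python
# Illustrative only: group membership is inferred from the log output above,
# not taken from the real inventory of this testbed job.
SERVICE_GROUPS = {
    "cinder-api": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
    "cinder-scheduler": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
    "cinder-volume": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
    "cinder-backup": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
}


def item_result(host: str, service: str) -> str:
    """Return the status a host would report for one service item."""
    return "changed" if host in SERVICE_GROUPS.get(service, set()) else "skipping"


if __name__ == "__main__":
    for host in (f"testbed-node-{i}" for i in range(6)):
        for service in SERVICE_GROUPS:
            print(f"{host}: {service} -> {item_result(host, service)}")
```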
2025-05-31 16:36:54.310091 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-05-31 16:36:54.310099 | orchestrator | Saturday 31 May 2025 16:35:43 +0000 (0:00:00.288) 0:02:03.094 **********
2025-05-31 16:36:54.310107 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:36:54.310115 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:36:54.310122 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:36:54.310130 | orchestrator |
2025-05-31 16:36:54.310138 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-05-31 16:36:54.310145 | orchestrator | Saturday 31 May 2025 16:36:06 +0000 (0:00:22.755) 0:02:25.849 **********
2025-05-31 16:36:54.310159 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:36:54.310167 | orchestrator | changed: [testbed-node-1]
2025-05-31 16:36:54.310175 | orchestrator | changed: [testbed-node-2]
2025-05-31 16:36:54.310182 | orchestrator |
2025-05-31 16:36:54.310190 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-05-31 16:36:54.310203 | orchestrator | Saturday 31 May 2025 16:36:16 +0000 (0:00:10.389) 0:02:36.239 **********
2025-05-31 16:36:54.310212 | orchestrator | changed: [testbed-node-4]
2025-05-31 16:36:54.310220 | orchestrator | changed: [testbed-node-5]
2025-05-31 16:36:54.310227 | orchestrator | changed: [testbed-node-3]
2025-05-31 16:36:54.310235 | orchestrator |
2025-05-31 16:36:54.310243 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-05-31 16:36:54.310251 | orchestrator | Saturday 31 May 2025 16:36:39 +0000 (0:00:23.045) 0:02:59.284 **********
2025-05-31 16:36:54.310258 | orchestrator | changed: [testbed-node-4]
2025-05-31 16:36:54.310266 | orchestrator | changed: [testbed-node-3]
2025-05-31 16:36:54.310274 | orchestrator | changed: [testbed-node-5]
2025-05-31 16:36:54.310282 | orchestrator |
2025-05-31 16:36:54.310289 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-05-31 16:36:54.310297 | orchestrator | Saturday 31 May 2025 16:36:50 +0000 (0:00:10.710) 0:03:09.995 **********
2025-05-31 16:36:54.310305 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:36:54.310312 | orchestrator |
2025-05-31 16:36:54.310320 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 16:36:54.310329 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-31 16:36:54.310338 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-31 16:36:54.310346 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-31 16:36:54.310354 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-31 16:36:54.310361 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-31 16:36:54.310369 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-31 16:36:54.310377 | orchestrator |
2025-05-31 16:36:54.310385 | orchestrator |
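The PLAY RECAP block above is the standard per-host Ansible summary (ok, changed, unreachable, failed, skipped, rescued, ignored counters), and all hosts finished with failed=0 and unreachable=0. A small, hypothetical Python helper like the one below could turn such recap lines into structured data when post-processing these job logs; it assumes only the line format visible above.

```python
import re

# Matches recap lines such as:
#   testbed-node-0 : ok=21 changed=15 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap_line(line: str):
    """Parse one PLAY RECAP line into (host, {counter: value}), or None if it does not match."""
    match = RECAP_RE.match(line.strip())
    if not match:
        return None
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in match.group("counters").split())
    }
    return match.group("host"), counters


# Example with a line copied from the recap above:
host, counters = parse_recap_line(
    "testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0"
)
assert host == "testbed-node-0"
assert counters["failed"] == 0 and counters["changed"] == 15
```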
=============================================================================== 2025-05-31 16:36:54.310416 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.05s 2025-05-31 16:36:54.310424 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.76s 2025-05-31 16:36:54.310431 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.48s 2025-05-31 16:36:54.310439 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.55s 2025-05-31 16:36:54.310447 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.71s 2025-05-31 16:36:54.310455 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.39s 2025-05-31 16:36:54.310462 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.94s 2025-05-31 16:36:54.310472 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.31s 2025-05-31 16:36:54.310486 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.15s 2025-05-31 16:36:54.310505 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.07s 2025-05-31 16:36:54.310519 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.83s 2025-05-31 16:36:54.310533 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.71s 2025-05-31 16:36:54.310560 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.68s 2025-05-31 16:36:54.310576 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.67s 2025-05-31 16:36:54.310583 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.61s 2025-05-31 16:36:54.310591 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.47s 2025-05-31 16:36:54.310599 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.43s 2025-05-31 16:36:54.310607 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.39s 2025-05-31 16:36:54.310615 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.35s 2025-05-31 16:36:54.310622 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.32s 2025-05-31 16:36:54.310630 | orchestrator | 2025-05-31 16:36:54 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:54.310638 | orchestrator | 2025-05-31 16:36:54 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:36:54.310646 | orchestrator | 2025-05-31 16:36:54 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:36:54.310654 | orchestrator | 2025-05-31 16:36:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:36:57.351470 | orchestrator | 2025-05-31 16:36:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:36:57.353167 | orchestrator | 2025-05-31 16:36:57 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:36:57.354621 | orchestrator | 2025-05-31 16:36:57 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:36:57.356216 | orchestrator | 2025-05-31 16:36:57 | 
INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:36:57.358318 | orchestrator | 2025-05-31 16:36:57 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:36:57.358411 | orchestrator | 2025-05-31 16:36:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:00.403760 | orchestrator | 2025-05-31 16:37:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:00.404981 | orchestrator | 2025-05-31 16:37:00 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:00.407334 | orchestrator | 2025-05-31 16:37:00 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:00.408807 | orchestrator | 2025-05-31 16:37:00 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:00.411456 | orchestrator | 2025-05-31 16:37:00 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:00.411492 | orchestrator | 2025-05-31 16:37:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:03.474575 | orchestrator | 2025-05-31 16:37:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:03.476308 | orchestrator | 2025-05-31 16:37:03 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:03.478072 | orchestrator | 2025-05-31 16:37:03 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:03.479173 | orchestrator | 2025-05-31 16:37:03 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:03.480632 | orchestrator | 2025-05-31 16:37:03 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:03.480657 | orchestrator | 2025-05-31 16:37:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:06.528491 | orchestrator | 2025-05-31 16:37:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:06.529770 | orchestrator | 2025-05-31 16:37:06 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:06.531752 | orchestrator | 2025-05-31 16:37:06 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:06.533626 | orchestrator | 2025-05-31 16:37:06 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:06.535100 | orchestrator | 2025-05-31 16:37:06 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:06.535300 | orchestrator | 2025-05-31 16:37:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:09.595379 | orchestrator | 2025-05-31 16:37:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:09.596154 | orchestrator | 2025-05-31 16:37:09 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:09.597240 | orchestrator | 2025-05-31 16:37:09 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:09.598482 | orchestrator | 2025-05-31 16:37:09 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:09.599349 | orchestrator | 2025-05-31 16:37:09 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:09.602057 | orchestrator | 2025-05-31 16:37:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:12.644237 | orchestrator | 2025-05-31 16:37:12 | 
INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:12.646716 | orchestrator | 2025-05-31 16:37:12 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:12.648948 | orchestrator | 2025-05-31 16:37:12 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:12.650979 | orchestrator | 2025-05-31 16:37:12 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:12.652488 | orchestrator | 2025-05-31 16:37:12 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:12.652638 | orchestrator | 2025-05-31 16:37:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:15.710499 | orchestrator | 2025-05-31 16:37:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:15.711978 | orchestrator | 2025-05-31 16:37:15 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:15.716907 | orchestrator | 2025-05-31 16:37:15 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:15.719213 | orchestrator | 2025-05-31 16:37:15 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:15.722230 | orchestrator | 2025-05-31 16:37:15 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:15.722272 | orchestrator | 2025-05-31 16:37:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:18.767031 | orchestrator | 2025-05-31 16:37:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:18.768355 | orchestrator | 2025-05-31 16:37:18 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:18.769932 | orchestrator | 2025-05-31 16:37:18 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:18.774189 | orchestrator | 2025-05-31 16:37:18 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:18.774280 | orchestrator | 2025-05-31 16:37:18 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:18.774295 | orchestrator | 2025-05-31 16:37:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:21.821289 | orchestrator | 2025-05-31 16:37:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:21.822558 | orchestrator | 2025-05-31 16:37:21 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:21.825946 | orchestrator | 2025-05-31 16:37:21 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:21.828996 | orchestrator | 2025-05-31 16:37:21 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:21.831534 | orchestrator | 2025-05-31 16:37:21 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:21.831901 | orchestrator | 2025-05-31 16:37:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:24.871505 | orchestrator | 2025-05-31 16:37:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:24.875170 | orchestrator | 2025-05-31 16:37:24 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:24.879652 | orchestrator | 2025-05-31 16:37:24 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:24.880665 | orchestrator | 
2025-05-31 16:37:24 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:24.881390 | orchestrator | 2025-05-31 16:37:24 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:24.881524 | orchestrator | 2025-05-31 16:37:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:27.920082 | orchestrator | 2025-05-31 16:37:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:27.920201 | orchestrator | 2025-05-31 16:37:27 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:27.921165 | orchestrator | 2025-05-31 16:37:27 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:27.922742 | orchestrator | 2025-05-31 16:37:27 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:27.922839 | orchestrator | 2025-05-31 16:37:27 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:27.922858 | orchestrator | 2025-05-31 16:37:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:30.969632 | orchestrator | 2025-05-31 16:37:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:30.970233 | orchestrator | 2025-05-31 16:37:30 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:30.972089 | orchestrator | 2025-05-31 16:37:30 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:30.973964 | orchestrator | 2025-05-31 16:37:30 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:30.976334 | orchestrator | 2025-05-31 16:37:30 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:30.976713 | orchestrator | 2025-05-31 16:37:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:34.017523 | orchestrator | 2025-05-31 16:37:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:34.018827 | orchestrator | 2025-05-31 16:37:34 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:34.019293 | orchestrator | 2025-05-31 16:37:34 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:34.020465 | orchestrator | 2025-05-31 16:37:34 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:34.021320 | orchestrator | 2025-05-31 16:37:34 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:34.021343 | orchestrator | 2025-05-31 16:37:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:37.071930 | orchestrator | 2025-05-31 16:37:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:37.072581 | orchestrator | 2025-05-31 16:37:37 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:37.073550 | orchestrator | 2025-05-31 16:37:37 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:37.074886 | orchestrator | 2025-05-31 16:37:37 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:37.076554 | orchestrator | 2025-05-31 16:37:37 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:37.076578 | orchestrator | 2025-05-31 16:37:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:40.129729 | orchestrator | 
2025-05-31 16:37:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:40.130162 | orchestrator | 2025-05-31 16:37:40 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:40.133132 | orchestrator | 2025-05-31 16:37:40 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:40.134228 | orchestrator | 2025-05-31 16:37:40 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:40.135282 | orchestrator | 2025-05-31 16:37:40 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:40.135513 | orchestrator | 2025-05-31 16:37:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:43.183616 | orchestrator | 2025-05-31 16:37:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:43.184586 | orchestrator | 2025-05-31 16:37:43 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:43.185655 | orchestrator | 2025-05-31 16:37:43 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:43.186712 | orchestrator | 2025-05-31 16:37:43 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:43.188798 | orchestrator | 2025-05-31 16:37:43 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:43.188821 | orchestrator | 2025-05-31 16:37:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:46.248077 | orchestrator | 2025-05-31 16:37:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:46.249050 | orchestrator | 2025-05-31 16:37:46 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:46.250179 | orchestrator | 2025-05-31 16:37:46 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:46.251420 | orchestrator | 2025-05-31 16:37:46 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:46.252654 | orchestrator | 2025-05-31 16:37:46 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state STARTED 2025-05-31 16:37:46.252711 | orchestrator | 2025-05-31 16:37:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:49.301712 | orchestrator | 2025-05-31 16:37:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:49.302643 | orchestrator | 2025-05-31 16:37:49 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:49.304210 | orchestrator | 2025-05-31 16:37:49 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:49.305233 | orchestrator | 2025-05-31 16:37:49 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:49.306908 | orchestrator | 2025-05-31 16:37:49 | INFO  | Task 09735340-1644-4453-acc2-12d5242d7b4f is in state SUCCESS 2025-05-31 16:37:49.306945 | orchestrator | 2025-05-31 16:37:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:52.356653 | orchestrator | 2025-05-31 16:37:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:52.358733 | orchestrator | 2025-05-31 16:37:52 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:52.360424 | orchestrator | 2025-05-31 16:37:52 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 
16:37:52.362343 | orchestrator | 2025-05-31 16:37:52 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:52.362386 | orchestrator | 2025-05-31 16:37:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:55.415025 | orchestrator | 2025-05-31 16:37:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:55.415694 | orchestrator | 2025-05-31 16:37:55 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:55.417827 | orchestrator | 2025-05-31 16:37:55 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:55.418832 | orchestrator | 2025-05-31 16:37:55 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:55.419052 | orchestrator | 2025-05-31 16:37:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:37:58.471232 | orchestrator | 2025-05-31 16:37:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:37:58.472267 | orchestrator | 2025-05-31 16:37:58 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:37:58.473799 | orchestrator | 2025-05-31 16:37:58 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:37:58.475177 | orchestrator | 2025-05-31 16:37:58 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:37:58.475206 | orchestrator | 2025-05-31 16:37:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:01.526706 | orchestrator | 2025-05-31 16:38:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:01.528133 | orchestrator | 2025-05-31 16:38:01 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:01.529644 | orchestrator | 2025-05-31 16:38:01 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:01.531937 | orchestrator | 2025-05-31 16:38:01 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:01.531967 | orchestrator | 2025-05-31 16:38:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:04.589279 | orchestrator | 2025-05-31 16:38:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:04.590238 | orchestrator | 2025-05-31 16:38:04 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:04.592240 | orchestrator | 2025-05-31 16:38:04 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:04.593849 | orchestrator | 2025-05-31 16:38:04 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:04.593880 | orchestrator | 2025-05-31 16:38:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:07.641428 | orchestrator | 2025-05-31 16:38:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:07.642489 | orchestrator | 2025-05-31 16:38:07 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:07.644641 | orchestrator | 2025-05-31 16:38:07 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:07.647488 | orchestrator | 2025-05-31 16:38:07 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:07.647515 | orchestrator | 2025-05-31 16:38:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:10.700954 | 
orchestrator | 2025-05-31 16:38:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:10.701881 | orchestrator | 2025-05-31 16:38:10 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:10.704990 | orchestrator | 2025-05-31 16:38:10 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:10.706878 | orchestrator | 2025-05-31 16:38:10 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:10.706939 | orchestrator | 2025-05-31 16:38:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:13.754754 | orchestrator | 2025-05-31 16:38:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:13.756145 | orchestrator | 2025-05-31 16:38:13 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:13.757521 | orchestrator | 2025-05-31 16:38:13 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:13.759232 | orchestrator | 2025-05-31 16:38:13 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:13.759382 | orchestrator | 2025-05-31 16:38:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:16.815848 | orchestrator | 2025-05-31 16:38:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:16.818878 | orchestrator | 2025-05-31 16:38:16 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:16.823568 | orchestrator | 2025-05-31 16:38:16 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:16.827261 | orchestrator | 2025-05-31 16:38:16 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:16.827296 | orchestrator | 2025-05-31 16:38:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:19.883639 | orchestrator | 2025-05-31 16:38:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:19.888330 | orchestrator | 2025-05-31 16:38:19 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:19.888406 | orchestrator | 2025-05-31 16:38:19 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:19.891542 | orchestrator | 2025-05-31 16:38:19 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:19.892506 | orchestrator | 2025-05-31 16:38:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:22.943322 | orchestrator | 2025-05-31 16:38:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:22.945346 | orchestrator | 2025-05-31 16:38:22 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:22.946073 | orchestrator | 2025-05-31 16:38:22 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:22.947216 | orchestrator | 2025-05-31 16:38:22 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:22.947405 | orchestrator | 2025-05-31 16:38:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:25.994236 | orchestrator | 2025-05-31 16:38:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:25.996302 | orchestrator | 2025-05-31 16:38:25 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:25.998634 | 
orchestrator | 2025-05-31 16:38:25 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:26.000200 | orchestrator | 2025-05-31 16:38:25 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:26.000448 | orchestrator | 2025-05-31 16:38:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:29.044204 | orchestrator | 2025-05-31 16:38:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:29.046157 | orchestrator | 2025-05-31 16:38:29 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:29.048261 | orchestrator | 2025-05-31 16:38:29 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:29.050571 | orchestrator | 2025-05-31 16:38:29 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:29.051169 | orchestrator | 2025-05-31 16:38:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:32.100253 | orchestrator | 2025-05-31 16:38:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:32.101579 | orchestrator | 2025-05-31 16:38:32 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:32.104286 | orchestrator | 2025-05-31 16:38:32 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:32.106585 | orchestrator | 2025-05-31 16:38:32 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:32.106971 | orchestrator | 2025-05-31 16:38:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:35.174287 | orchestrator | 2025-05-31 16:38:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:35.176205 | orchestrator | 2025-05-31 16:38:35 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:35.177866 | orchestrator | 2025-05-31 16:38:35 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:35.180069 | orchestrator | 2025-05-31 16:38:35 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:35.180155 | orchestrator | 2025-05-31 16:38:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:38.224681 | orchestrator | 2025-05-31 16:38:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:38.225974 | orchestrator | 2025-05-31 16:38:38 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:38.227420 | orchestrator | 2025-05-31 16:38:38 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:38.228648 | orchestrator | 2025-05-31 16:38:38 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:38.228673 | orchestrator | 2025-05-31 16:38:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:41.270811 | orchestrator | 2025-05-31 16:38:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:41.272299 | orchestrator | 2025-05-31 16:38:41 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:41.275899 | orchestrator | 2025-05-31 16:38:41 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:41.277445 | orchestrator | 2025-05-31 16:38:41 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state STARTED 2025-05-31 16:38:41.277471 | 
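The long run of "is in state STARTED" lines in this stretch of the log is produced by a simple poll-and-sleep loop: each queued task ID is checked roughly once per second until it reports SUCCESS (or, not seen here, FAILURE), which matches the recurring "Wait 1 second(s) until the next check" messages. The following is only a minimal Python sketch of that pattern, not the actual OSISM client code; get_task_state() is a hypothetical stand-in for the real status lookup.

    # Illustrative only: poll a set of task IDs until each reports SUCCESS,
    # mirroring the "Wait 1 second(s) until the next check" loop in the log.
    import time

    def get_task_state(task_id: str) -> str:
        """Hypothetical helper; would query the task backend for the current state."""
        raise NotImplementedError

    def wait_for_tasks(task_ids, interval: float = 1.0, timeout: float = 3600.0) -> None:
        pending = set(task_ids)
        deadline = time.monotonic() + timeout
        while pending:
            for task_id in sorted(pending):          # sorted() copies, so discard() is safe
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
                elif state == "FAILURE":
                    raise RuntimeError(f"Task {task_id} failed")
            if not pending:
                break
            if time.monotonic() > deadline:
                raise TimeoutError(f"Tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)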
orchestrator | 2025-05-31 16:38:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:44.340094 | orchestrator | 2025-05-31 16:38:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:44.341743 | orchestrator | 2025-05-31 16:38:44 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:44.342913 | orchestrator | 2025-05-31 16:38:44 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:44.344716 | orchestrator | 2025-05-31 16:38:44 | INFO  | Task 16ec69d7-663f-43a0-9e67-cad1e631b115 is in state SUCCESS 2025-05-31 16:38:44.345397 | orchestrator | 2025-05-31 16:38:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:44.346938 | orchestrator | 2025-05-31 16:38:44.346972 | orchestrator | 2025-05-31 16:38:44.346984 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:38:44.347058 | orchestrator | 2025-05-31 16:38:44.347070 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:38:44.347082 | orchestrator | Saturday 31 May 2025 16:36:50 +0000 (0:00:00.318) 0:00:00.318 ********** 2025-05-31 16:38:44.347093 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:38:44.347106 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:38:44.347128 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:38:44.347139 | orchestrator | 2025-05-31 16:38:44.347173 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:38:44.347185 | orchestrator | Saturday 31 May 2025 16:36:51 +0000 (0:00:00.405) 0:00:00.723 ********** 2025-05-31 16:38:44.347221 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-31 16:38:44.347233 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-31 16:38:44.347244 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-31 16:38:44.347364 | orchestrator | 2025-05-31 16:38:44.347414 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-31 16:38:44.347426 | orchestrator | 2025-05-31 16:38:44.347438 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-31 16:38:44.347449 | orchestrator | Saturday 31 May 2025 16:36:51 +0000 (0:00:00.341) 0:00:01.065 ********** 2025-05-31 16:38:44.347459 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:38:44.347471 | orchestrator | 2025-05-31 16:38:44.347482 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-31 16:38:44.347492 | orchestrator | Saturday 31 May 2025 16:36:52 +0000 (0:00:00.752) 0:00:01.818 ********** 2025-05-31 16:38:44.347505 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-31 16:38:44.347515 | orchestrator | 2025-05-31 16:38:44.347526 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-31 16:38:44.347538 | orchestrator | Saturday 31 May 2025 16:36:55 +0000 (0:00:03.471) 0:00:05.289 ********** 2025-05-31 16:38:44.347551 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-31 16:38:44.347585 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> 
public) 2025-05-31 16:38:44.347597 | orchestrator | 2025-05-31 16:38:44.347609 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-31 16:38:44.347622 | orchestrator | Saturday 31 May 2025 16:37:02 +0000 (0:00:07.093) 0:00:12.382 ********** 2025-05-31 16:38:44.347634 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:38:44.347647 | orchestrator | 2025-05-31 16:38:44.347659 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-31 16:38:44.347671 | orchestrator | Saturday 31 May 2025 16:37:06 +0000 (0:00:03.597) 0:00:15.980 ********** 2025-05-31 16:38:44.347683 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:38:44.347718 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-31 16:38:44.347731 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-31 16:38:44.347742 | orchestrator | 2025-05-31 16:38:44.347754 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-31 16:38:44.347766 | orchestrator | Saturday 31 May 2025 16:37:15 +0000 (0:00:08.719) 0:00:24.699 ********** 2025-05-31 16:38:44.347778 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:38:44.347790 | orchestrator | 2025-05-31 16:38:44.347802 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-31 16:38:44.347815 | orchestrator | Saturday 31 May 2025 16:37:18 +0000 (0:00:03.466) 0:00:28.166 ********** 2025-05-31 16:38:44.347827 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-31 16:38:44.347839 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-31 16:38:44.347849 | orchestrator | 2025-05-31 16:38:44.347859 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-31 16:38:44.347870 | orchestrator | Saturday 31 May 2025 16:37:27 +0000 (0:00:08.238) 0:00:36.404 ********** 2025-05-31 16:38:44.347881 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-31 16:38:44.347891 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-31 16:38:44.347902 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-31 16:38:44.347912 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-31 16:38:44.347923 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-31 16:38:44.347933 | orchestrator | 2025-05-31 16:38:44.347944 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-31 16:38:44.347954 | orchestrator | Saturday 31 May 2025 16:37:43 +0000 (0:00:16.597) 0:00:53.002 ********** 2025-05-31 16:38:44.347965 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:38:44.347976 | orchestrator | 2025-05-31 16:38:44.347986 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-31 16:38:44.348011 | orchestrator | Saturday 31 May 2025 16:37:44 +0000 (0:00:00.767) 0:00:53.770 ********** 2025-05-31 16:38:44.348040 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "

503 Service Unavailable

\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-31 16:38:44.348056 | orchestrator | 2025-05-31 16:38:44.348067 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:38:44.348079 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-31 16:38:44.348099 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:38:44.348111 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:38:44.348122 | orchestrator | 2025-05-31 16:38:44.348132 | orchestrator | 2025-05-31 16:38:44.348150 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:38:44.348161 | orchestrator | Saturday 31 May 2025 16:37:47 +0000 (0:00:03.414) 0:00:57.184 ********** 2025-05-31 16:38:44.348171 | orchestrator | =============================================================================== 2025-05-31 16:38:44.348182 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.60s 2025-05-31 16:38:44.348193 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.72s 2025-05-31 16:38:44.348204 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.24s 2025-05-31 16:38:44.348214 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.09s 2025-05-31 16:38:44.348225 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.60s 2025-05-31 16:38:44.348236 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.47s 2025-05-31 16:38:44.348246 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.47s 2025-05-31 16:38:44.348257 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.41s 2025-05-31 16:38:44.348268 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.77s 2025-05-31 16:38:44.348278 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.75s 2025-05-31 16:38:44.348289 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-05-31 16:38:44.348300 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-05-31 16:38:44.348310 | orchestrator | 2025-05-31 16:38:44.348321 | orchestrator | 2025-05-31 16:38:44.348332 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:38:44.348342 | orchestrator | 2025-05-31 16:38:44.348354 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:38:44.348364 | orchestrator | Saturday 31 May 2025 16:36:54 +0000 (0:00:00.297) 0:00:00.297 ********** 2025-05-31 16:38:44.348375 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:38:44.348386 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:38:44.348396 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:38:44.348407 | orchestrator | 2025-05-31 16:38:44.348418 | orchestrator | TASK [Group hosts based 
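The only failed task in this run is octavia's "Create amphora flavor" above: the call to https://api-int.testbed.osism.xyz:8774 came back as a bare 503 "No server is available to handle this request", which typically indicates that the load balancer had no healthy nova-api backend at that moment rather than a problem with the flavor definition itself. One way to ride out such a transient condition is to retry the flavor call with a backoff. The sketch below uses openstacksdk purely for illustration; the cloud name, flavor sizes and retry budget are assumptions, not values taken from this deployment.

    # Illustrative sketch, not the kolla-ansible task itself: create the amphora
    # flavor via openstacksdk and retry a few times when the API answers 5xx
    # (e.g. "503 No server is available" while backends are still coming up).
    import time
    import openstack
    from openstack import exceptions

    def ensure_amphora_flavor(cloud_name: str = "testbed", retries: int = 5) -> None:
        conn = openstack.connect(cloud=cloud_name)   # cloud name is a placeholder
        for attempt in range(1, retries + 1):
            try:
                if conn.compute.find_flavor("amphora"):
                    return                            # already present, nothing to do
                conn.compute.create_flavor(
                    name="amphora", ram=1024, vcpus=1, disk=5, is_public=False,
                )                                     # sizes are illustrative only
                return
            except exceptions.HttpException as exc:
                if exc.status_code and exc.status_code >= 500 and attempt < retries:
                    time.sleep(10 * attempt)          # back off, let the backend recover
                    continue
                raise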
on enabled services] *********************************** 2025-05-31 16:38:44.348428 | orchestrator | Saturday 31 May 2025 16:36:54 +0000 (0:00:00.378) 0:00:00.675 ********** 2025-05-31 16:38:44.348439 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-31 16:38:44.348450 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-31 16:38:44.348461 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-31 16:38:44.348472 | orchestrator | 2025-05-31 16:38:44.348483 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-31 16:38:44.348494 | orchestrator | 2025-05-31 16:38:44.348504 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-31 16:38:44.348515 | orchestrator | Saturday 31 May 2025 16:36:55 +0000 (0:00:00.288) 0:00:00.964 ********** 2025-05-31 16:38:44.348526 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:38:44.348536 | orchestrator | 2025-05-31 16:38:44.348547 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-31 16:38:44.348558 | orchestrator | Saturday 31 May 2025 16:36:55 +0000 (0:00:00.699) 0:00:01.663 ********** 2025-05-31 16:38:44.348570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.348597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.348615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.348626 | orchestrator | 2025-05-31 
16:38:44.348637 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-31 16:38:44.348648 | orchestrator | Saturday 31 May 2025 16:36:56 +0000 (0:00:00.822) 0:00:02.486 ********** 2025-05-31 16:38:44.348658 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-31 16:38:44.348669 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-31 16:38:44.348680 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:38:44.348708 | orchestrator | 2025-05-31 16:38:44.348720 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-31 16:38:44.348730 | orchestrator | Saturday 31 May 2025 16:36:57 +0000 (0:00:00.507) 0:00:02.994 ********** 2025-05-31 16:38:44.348741 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:38:44.348752 | orchestrator | 2025-05-31 16:38:44.348762 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-31 16:38:44.348773 | orchestrator | Saturday 31 May 2025 16:36:57 +0000 (0:00:00.566) 0:00:03.560 ********** 2025-05-31 16:38:44.348784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.348796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.348817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.348828 | orchestrator | 2025-05-31 16:38:44.348839 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend 
internal TLS certificate] *** 2025-05-31 16:38:44.348849 | orchestrator | Saturday 31 May 2025 16:36:59 +0000 (0:00:01.458) 0:00:05.019 ********** 2025-05-31 16:38:44.348868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:38:44.348881 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.348897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:38:44.348909 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.348920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:38:44.348932 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.348943 | orchestrator | 2025-05-31 16:38:44.348953 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-31 16:38:44.348964 | orchestrator | Saturday 31 May 2025 16:36:59 +0000 (0:00:00.718) 0:00:05.738 ********** 2025-05-31 16:38:44.348982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:38:44.348994 | 
orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.349005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:38:44.349016 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.349034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 16:38:44.349046 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.349057 | orchestrator | 2025-05-31 16:38:44.349067 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-31 16:38:44.349078 | orchestrator | Saturday 31 May 2025 16:37:00 +0000 (0:00:00.649) 0:00:06.387 ********** 2025-05-31 16:38:44.349094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.349106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.349117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.349135 | orchestrator | 2025-05-31 16:38:44.349146 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-31 16:38:44.349156 | orchestrator | Saturday 31 May 2025 16:37:01 +0000 (0:00:01.374) 0:00:07.762 ********** 2025-05-31 16:38:44.349167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.349185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.349198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.349209 | orchestrator | 2025-05-31 16:38:44.349224 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-31 16:38:44.349235 | orchestrator | Saturday 31 May 2025 16:37:03 +0000 (0:00:01.567) 0:00:09.330 ********** 2025-05-31 16:38:44.349246 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.349257 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.349267 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.349278 | orchestrator | 2025-05-31 16:38:44.349289 | 
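The task that follows ("Configuring Prometheus as data source for Grafana") renders prometheus.yaml.j2 into Grafana's data-source provisioning directory. As a rough illustration of what such a provisioning file contains, the Python sketch below emits a minimal equivalent; the URL and flags are placeholders, and the real template will differ (deployment-specific endpoint, authentication, and so on).

    # Illustrative only: write a minimal Grafana data-source provisioning file.
    # Every value here is a placeholder, not taken from this deployment.
    import yaml

    datasource = {
        "apiVersion": 1,
        "datasources": [
            {
                "name": "Prometheus",
                "type": "prometheus",
                "access": "proxy",
                "url": "http://prometheus.example:9091",   # placeholder endpoint
                "isDefault": True,
            }
        ],
    }

    with open("prometheus.yaml", "w") as handle:
        yaml.safe_dump(datasource, handle, sort_keys=False)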
orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-31 16:38:44.349299 | orchestrator | Saturday 31 May 2025 16:37:03 +0000 (0:00:00.284) 0:00:09.614 ********** 2025-05-31 16:38:44.349310 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-31 16:38:44.349320 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-31 16:38:44.349331 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-31 16:38:44.349348 | orchestrator | 2025-05-31 16:38:44.349358 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-31 16:38:44.349369 | orchestrator | Saturday 31 May 2025 16:37:05 +0000 (0:00:01.468) 0:00:11.083 ********** 2025-05-31 16:38:44.349379 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-31 16:38:44.349390 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-31 16:38:44.349401 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-31 16:38:44.349412 | orchestrator | 2025-05-31 16:38:44.349422 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-31 16:38:44.349433 | orchestrator | Saturday 31 May 2025 16:37:06 +0000 (0:00:01.411) 0:00:12.494 ********** 2025-05-31 16:38:44.349443 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:38:44.349454 | orchestrator | 2025-05-31 16:38:44.349464 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-31 16:38:44.349475 | orchestrator | Saturday 31 May 2025 16:37:07 +0000 (0:00:00.475) 0:00:12.970 ********** 2025-05-31 16:38:44.349485 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-31 16:38:44.349495 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-31 16:38:44.349506 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:38:44.349517 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:38:44.349527 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:38:44.349538 | orchestrator | 2025-05-31 16:38:44.349548 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-31 16:38:44.349559 | orchestrator | Saturday 31 May 2025 16:37:07 +0000 (0:00:00.847) 0:00:13.817 ********** 2025-05-31 16:38:44.349569 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.349580 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.349591 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.349601 | orchestrator | 2025-05-31 16:38:44.349612 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-31 16:38:44.349622 | orchestrator | Saturday 31 May 2025 16:37:08 +0000 (0:00:00.417) 0:00:14.234 ********** 2025-05-31 16:38:44.349633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 167897, 'inode': 1088398, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8987565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088398, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8987565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088398, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8987565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.868756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.868756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088392, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.868756, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088387, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.866756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088387, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.866756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088387, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.866756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088396, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8707561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088396, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8707561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 
16:38:44.349835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088396, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8707561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088383, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.863756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.349857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088383, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.863756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088383, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.863756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088389, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8677561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088389, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8677561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088389, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8677561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088395, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.869756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088395, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.869756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088395, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.869756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 
'inode': 1088382, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.862756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088382, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.862756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088382, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.862756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088322, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8377554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088322, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8377554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088322, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8377554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088384, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8647559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088384, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8647559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088384, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8647559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088324, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8397555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088324, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8397555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350724 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088324, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8397555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1088393, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706248.869756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1088393, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706248.869756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1088393, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706248.869756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1088385, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706248.865756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1088385, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706248.865756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1088385, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706248.865756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088397, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.871756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088397, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.871756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088397, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.871756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088340, 'dev': 182, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8477557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088340, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8477557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088340, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8477557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8677561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8677561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088391, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8677561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088323, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8397555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.350999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088323, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8397555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088323, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8397555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088326, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8417556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088326, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8417556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351071 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088326, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.8417556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088386, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.866756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088386, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.866756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088386, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.866756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089052, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0677593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089052, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0677593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089052, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0677593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089042, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0587592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089042, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0587592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089042, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0587592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089112, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0727594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089112, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0727594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089112, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0727594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088453, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9007566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088453, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9007566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088453, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9007566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089126, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1237602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089126, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1237602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089126, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1237602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089088, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0687594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 70691, 'inode': 1089088, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0687594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089088, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0687594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1089095, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0697594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1089095, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0697594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1089095, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0697594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088455, 'dev': 182, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9017565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088455, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9017565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088455, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9017565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1089048, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0597591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1089048, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0597591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1089048, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0597591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1089332, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1247604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1089332, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1247604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1089332, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1247604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1089101, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.0707595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1089101, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.0707595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-31 16:38:44.351620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1089101, 'dev': 182, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748706249.0707595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088470, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9047565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088470, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9047565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088470, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9047565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088459, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9027567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351688 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088459, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9027567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088459, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9027567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088481, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9067566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088481, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9067566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088481, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706248.9067566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088486, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0577593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088486, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0577593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088486, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.0577593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1089336, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1257603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1089336, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1257603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 
1089336, 'dev': 182, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748706249.1257603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-31 16:38:44.351869 | orchestrator | 2025-05-31 16:38:44.351880 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-31 16:38:44.351897 | orchestrator | Saturday 31 May 2025 16:37:42 +0000 (0:00:33.693) 0:00:47.927 ********** 2025-05-31 16:38:44.351908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.351919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.351931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 16:38:44.351942 | orchestrator | 2025-05-31 16:38:44.351953 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-31 16:38:44.351964 | orchestrator | Saturday 31 May 2025 16:37:43 +0000 (0:00:01.080) 0:00:49.008 ********** 2025-05-31 16:38:44.351975 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:38:44.351986 | orchestrator | 2025-05-31 16:38:44.352002 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-31 16:38:44.352013 | orchestrator | Saturday 31 May 2025 16:37:45 +0000 (0:00:02.668) 0:00:51.676 ********** 2025-05-31 16:38:44.352024 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:38:44.352035 | orchestrator | 2025-05-31 16:38:44.352046 | orchestrator | 
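
Editor's note: the handlers that follow restart grafana one node at a time and then poll until the service answers (the "Waiting for grafana to start on first node" handler below retries its check a couple of times before reporting ok). A minimal sketch of that retry-until-ready pattern, with an illustrative URL, retry count and delay that are assumptions rather than the role's actual values:

    import time
    import urllib.request

    def wait_until_ready(url, retries=12, delay=10):
        """Retry an HTTP GET until it succeeds or the retries run out.

        url, retries and delay are illustrative assumptions; the handler
        shown next in the log performs a comparable readiness check on the
        first grafana node before the remaining containers are restarted.
        """
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=5):
                    return True
            except OSError:
                print(f"FAILED - RETRYING ({retries - attempt - 1} retries left)")
                time.sleep(delay)
        return False

    # Example (hypothetical endpoint):
    # wait_until_ready("https://api-int.testbed.osism.xyz:3000/login")
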
TASK [grafana : Flush handlers] ************************************************ 2025-05-31 16:38:44.352056 | orchestrator | Saturday 31 May 2025 16:37:48 +0000 (0:00:02.356) 0:00:54.033 ********** 2025-05-31 16:38:44.352067 | orchestrator | 2025-05-31 16:38:44.352078 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-31 16:38:44.352088 | orchestrator | Saturday 31 May 2025 16:37:48 +0000 (0:00:00.056) 0:00:54.090 ********** 2025-05-31 16:38:44.352099 | orchestrator | 2025-05-31 16:38:44.352110 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-31 16:38:44.352120 | orchestrator | Saturday 31 May 2025 16:37:48 +0000 (0:00:00.050) 0:00:54.141 ********** 2025-05-31 16:38:44.352131 | orchestrator | 2025-05-31 16:38:44.352142 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-31 16:38:44.352157 | orchestrator | Saturday 31 May 2025 16:37:48 +0000 (0:00:00.177) 0:00:54.318 ********** 2025-05-31 16:38:44.352167 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.352178 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.352189 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:38:44.352206 | orchestrator | 2025-05-31 16:38:44.352217 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-31 16:38:44.352228 | orchestrator | Saturday 31 May 2025 16:37:50 +0000 (0:00:01.757) 0:00:56.075 ********** 2025-05-31 16:38:44.352238 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.352249 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.352260 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-31 16:38:44.352271 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-05-31 16:38:44.352282 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:38:44.352293 | orchestrator | 2025-05-31 16:38:44.352304 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-31 16:38:44.352314 | orchestrator | Saturday 31 May 2025 16:38:17 +0000 (0:00:26.929) 0:01:23.005 ********** 2025-05-31 16:38:44.352325 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.352335 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:38:44.352346 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:38:44.352357 | orchestrator | 2025-05-31 16:38:44.352368 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-31 16:38:44.352378 | orchestrator | Saturday 31 May 2025 16:38:36 +0000 (0:00:19.152) 0:01:42.157 ********** 2025-05-31 16:38:44.352389 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:38:44.352399 | orchestrator | 2025-05-31 16:38:44.352410 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-31 16:38:44.352421 | orchestrator | Saturday 31 May 2025 16:38:38 +0000 (0:00:02.348) 0:01:44.505 ********** 2025-05-31 16:38:44.352431 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.352442 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:38:44.352453 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:38:44.352463 | orchestrator | 2025-05-31 16:38:44.352474 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-31 16:38:44.352484 | orchestrator | Saturday 31 May 2025 16:38:39 +0000 (0:00:00.389) 0:01:44.894 ********** 2025-05-31 16:38:44.352497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-05-31 16:38:44.352508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-31 16:38:44.352520 | orchestrator | 2025-05-31 16:38:44.352531 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-31 16:38:44.352541 | orchestrator | Saturday 31 May 2025 16:38:41 +0000 (0:00:02.689) 0:01:47.584 ********** 2025-05-31 16:38:44.352552 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:38:44.352563 | orchestrator | 2025-05-31 16:38:44.352574 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:38:44.352585 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-31 16:38:44.352596 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-31 16:38:44.352607 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-31 16:38:44.352618 | orchestrator | 2025-05-31 16:38:44.352629 | orchestrator | 2025-05-31 16:38:44.352639 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-31 16:38:44.352656 | orchestrator | Saturday 31 May 2025 16:38:42 +0000 (0:00:00.336) 0:01:47.921 ********** 2025-05-31 16:38:44.352666 | orchestrator | =============================================================================== 2025-05-31 16:38:44.352677 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.69s 2025-05-31 16:38:44.352742 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.93s 2025-05-31 16:38:44.352757 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 19.15s 2025-05-31 16:38:44.352767 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.69s 2025-05-31 16:38:44.352778 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.67s 2025-05-31 16:38:44.352789 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.36s 2025-05-31 16:38:44.352800 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.35s 2025-05-31 16:38:44.352810 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s 2025-05-31 16:38:44.352821 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s 2025-05-31 16:38:44.352832 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.47s 2025-05-31 16:38:44.352847 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s 2025-05-31 16:38:44.352858 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.41s 2025-05-31 16:38:44.352869 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s 2025-05-31 16:38:44.352880 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s 2025-05-31 16:38:44.352891 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.85s 2025-05-31 16:38:44.352902 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s 2025-05-31 16:38:44.352912 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.72s 2025-05-31 16:38:44.352923 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.70s 2025-05-31 16:38:44.352934 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.65s 2025-05-31 16:38:44.352945 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.57s 2025-05-31 16:38:47.391217 | orchestrator | 2025-05-31 16:38:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:47.393352 | orchestrator | 2025-05-31 16:38:47 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:47.394785 | orchestrator | 2025-05-31 16:38:47 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:47.394820 | orchestrator | 2025-05-31 16:38:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:50.439857 | orchestrator | 2025-05-31 16:38:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:50.442795 | orchestrator | 2025-05-31 16:38:50 | INFO  | Task 
c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:50.444609 | orchestrator | 2025-05-31 16:38:50 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:50.444628 | orchestrator | 2025-05-31 16:38:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:53.508457 | orchestrator | 2025-05-31 16:38:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:53.512113 | orchestrator | 2025-05-31 16:38:53 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:53.515296 | orchestrator | 2025-05-31 16:38:53 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:53.515314 | orchestrator | 2025-05-31 16:38:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:56.565679 | orchestrator | 2025-05-31 16:38:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:56.567458 | orchestrator | 2025-05-31 16:38:56 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:56.569657 | orchestrator | 2025-05-31 16:38:56 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:56.569722 | orchestrator | 2025-05-31 16:38:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:38:59.621272 | orchestrator | 2025-05-31 16:38:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:38:59.621497 | orchestrator | 2025-05-31 16:38:59 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:38:59.624170 | orchestrator | 2025-05-31 16:38:59 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state STARTED 2025-05-31 16:38:59.624204 | orchestrator | 2025-05-31 16:38:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:02.679195 | orchestrator | 2025-05-31 16:39:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:02.681313 | orchestrator | 2025-05-31 16:39:02 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:02.682353 | orchestrator | 2025-05-31 16:39:02 | INFO  | Task 39eaf8df-85eb-4da9-bf59-91eb80c7cf4f is in state SUCCESS 2025-05-31 16:39:02.682516 | orchestrator | 2025-05-31 16:39:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:05.736909 | orchestrator | 2025-05-31 16:39:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:05.737541 | orchestrator | 2025-05-31 16:39:05 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:05.737569 | orchestrator | 2025-05-31 16:39:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:08.787311 | orchestrator | 2025-05-31 16:39:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:08.788698 | orchestrator | 2025-05-31 16:39:08 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:08.788773 | orchestrator | 2025-05-31 16:39:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:11.827204 | orchestrator | 2025-05-31 16:39:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:11.827620 | orchestrator | 2025-05-31 16:39:11 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:11.829506 | orchestrator | 2025-05-31 16:39:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 
16:39:14.866196 | orchestrator | 2025-05-31 16:39:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:14.868277 | orchestrator | 2025-05-31 16:39:14 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:14.868640 | orchestrator | 2025-05-31 16:39:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:17.913349 | orchestrator | 2025-05-31 16:39:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:17.916030 | orchestrator | 2025-05-31 16:39:17 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:17.916071 | orchestrator | 2025-05-31 16:39:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:20.952780 | orchestrator | 2025-05-31 16:39:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:20.953467 | orchestrator | 2025-05-31 16:39:20 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:20.953524 | orchestrator | 2025-05-31 16:39:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:24.007130 | orchestrator | 2025-05-31 16:39:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:24.007234 | orchestrator | 2025-05-31 16:39:24 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:24.007250 | orchestrator | 2025-05-31 16:39:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:27.052785 | orchestrator | 2025-05-31 16:39:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:27.056585 | orchestrator | 2025-05-31 16:39:27 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:27.056641 | orchestrator | 2025-05-31 16:39:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:30.101078 | orchestrator | 2025-05-31 16:39:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:30.101534 | orchestrator | 2025-05-31 16:39:30 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:30.101567 | orchestrator | 2025-05-31 16:39:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:33.153010 | orchestrator | 2025-05-31 16:39:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:33.153118 | orchestrator | 2025-05-31 16:39:33 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:33.153133 | orchestrator | 2025-05-31 16:39:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:36.199749 | orchestrator | 2025-05-31 16:39:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:36.202394 | orchestrator | 2025-05-31 16:39:36 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:36.202429 | orchestrator | 2025-05-31 16:39:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:39.252058 | orchestrator | 2025-05-31 16:39:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:39.252558 | orchestrator | 2025-05-31 16:39:39 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:39.252602 | orchestrator | 2025-05-31 16:39:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:42.310315 | orchestrator | 2025-05-31 16:39:42 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:42.310485 | orchestrator | 2025-05-31 16:39:42 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:42.311605 | orchestrator | 2025-05-31 16:39:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:45.365268 | orchestrator | 2025-05-31 16:39:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:45.365371 | orchestrator | 2025-05-31 16:39:45 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:45.365386 | orchestrator | 2025-05-31 16:39:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:48.416083 | orchestrator | 2025-05-31 16:39:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:48.416467 | orchestrator | 2025-05-31 16:39:48 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:48.416500 | orchestrator | 2025-05-31 16:39:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:51.455120 | orchestrator | 2025-05-31 16:39:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:51.455255 | orchestrator | 2025-05-31 16:39:51 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:51.455271 | orchestrator | 2025-05-31 16:39:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:54.496764 | orchestrator | 2025-05-31 16:39:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:54.497031 | orchestrator | 2025-05-31 16:39:54 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:54.497139 | orchestrator | 2025-05-31 16:39:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:39:57.544943 | orchestrator | 2025-05-31 16:39:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:39:57.548190 | orchestrator | 2025-05-31 16:39:57 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:39:57.548732 | orchestrator | 2025-05-31 16:39:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:00.588865 | orchestrator | 2025-05-31 16:40:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:00.589540 | orchestrator | 2025-05-31 16:40:00 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:00.589633 | orchestrator | 2025-05-31 16:40:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:03.638940 | orchestrator | 2025-05-31 16:40:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:03.640226 | orchestrator | 2025-05-31 16:40:03 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:03.640264 | orchestrator | 2025-05-31 16:40:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:06.685613 | orchestrator | 2025-05-31 16:40:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:06.688163 | orchestrator | 2025-05-31 16:40:06 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:06.688221 | orchestrator | 2025-05-31 16:40:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:09.737734 | orchestrator | 2025-05-31 16:40:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:09.739165 | orchestrator 
| 2025-05-31 16:40:09 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:09.739191 | orchestrator | 2025-05-31 16:40:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:12.767775 | orchestrator | 2025-05-31 16:40:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:12.768185 | orchestrator | 2025-05-31 16:40:12 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:12.768295 | orchestrator | 2025-05-31 16:40:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:15.799716 | orchestrator | 2025-05-31 16:40:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:15.801896 | orchestrator | 2025-05-31 16:40:15 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:15.801926 | orchestrator | 2025-05-31 16:40:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:18.836048 | orchestrator | 2025-05-31 16:40:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:18.836143 | orchestrator | 2025-05-31 16:40:18 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:18.836158 | orchestrator | 2025-05-31 16:40:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:21.883954 | orchestrator | 2025-05-31 16:40:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:21.884923 | orchestrator | 2025-05-31 16:40:21 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:21.885221 | orchestrator | 2025-05-31 16:40:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:24.937401 | orchestrator | 2025-05-31 16:40:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:24.939603 | orchestrator | 2025-05-31 16:40:24 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:24.939689 | orchestrator | 2025-05-31 16:40:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:27.977545 | orchestrator | 2025-05-31 16:40:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:27.978000 | orchestrator | 2025-05-31 16:40:27 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:27.978155 | orchestrator | 2025-05-31 16:40:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:31.021464 | orchestrator | 2025-05-31 16:40:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:31.023379 | orchestrator | 2025-05-31 16:40:31 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:31.023417 | orchestrator | 2025-05-31 16:40:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:34.076772 | orchestrator | 2025-05-31 16:40:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:34.076883 | orchestrator | 2025-05-31 16:40:34 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:34.076899 | orchestrator | 2025-05-31 16:40:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:37.134228 | orchestrator | 2025-05-31 16:40:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:37.136018 | orchestrator | 2025-05-31 16:40:37 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 
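
Editor's note: the INFO lines in this part of the log come from the OSISM client watching the tasks it queued on the manager (f2bae605…, c1e32895…, 39eaf8df… and, later, 69629780…): it re-reads the state of every task that is still running and sleeps briefly before the next round, until each task reports SUCCESS. A minimal sketch of such a watcher, assuming a hypothetical get_task_state(task_id) helper rather than the client's real API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states until no task is still running.

        get_task_state is a hypothetical helper returning a state string
        such as "STARTED" or "SUCCESS" for a task ID; the real client logs
        the same information as the INFO lines surrounding this sketch.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
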
2025-05-31 16:40:37.136070 | orchestrator | 2025-05-31 16:40:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:40.183007 | orchestrator | 2025-05-31 16:40:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:40.184470 | orchestrator | 2025-05-31 16:40:40 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:40.184522 | orchestrator | 2025-05-31 16:40:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:43.230896 | orchestrator | 2025-05-31 16:40:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:43.232805 | orchestrator | 2025-05-31 16:40:43 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:43.232848 | orchestrator | 2025-05-31 16:40:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:46.279461 | orchestrator | 2025-05-31 16:40:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:46.282337 | orchestrator | 2025-05-31 16:40:46 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:46.282383 | orchestrator | 2025-05-31 16:40:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:49.337600 | orchestrator | 2025-05-31 16:40:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:49.339779 | orchestrator | 2025-05-31 16:40:49 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:49.339852 | orchestrator | 2025-05-31 16:40:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:52.388708 | orchestrator | 2025-05-31 16:40:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:52.389875 | orchestrator | 2025-05-31 16:40:52 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:52.389910 | orchestrator | 2025-05-31 16:40:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:55.435621 | orchestrator | 2025-05-31 16:40:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:55.436696 | orchestrator | 2025-05-31 16:40:55 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:55.436727 | orchestrator | 2025-05-31 16:40:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:40:58.498107 | orchestrator | 2025-05-31 16:40:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:40:58.498232 | orchestrator | 2025-05-31 16:40:58 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:40:58.499054 | orchestrator | 2025-05-31 16:40:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:01.549869 | orchestrator | 2025-05-31 16:41:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:01.552406 | orchestrator | 2025-05-31 16:41:01 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:01.552463 | orchestrator | 2025-05-31 16:41:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:04.595277 | orchestrator | 2025-05-31 16:41:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:04.597192 | orchestrator | 2025-05-31 16:41:04 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:04.597229 | orchestrator | 2025-05-31 16:41:04 | INFO  | Wait 1 second(s) until 
the next check 2025-05-31 16:41:07.648172 | orchestrator | 2025-05-31 16:41:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:07.649828 | orchestrator | 2025-05-31 16:41:07 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:07.649873 | orchestrator | 2025-05-31 16:41:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:10.691975 | orchestrator | 2025-05-31 16:41:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:10.693414 | orchestrator | 2025-05-31 16:41:10 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:10.693445 | orchestrator | 2025-05-31 16:41:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:13.743424 | orchestrator | 2025-05-31 16:41:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:13.745758 | orchestrator | 2025-05-31 16:41:13 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:13.745800 | orchestrator | 2025-05-31 16:41:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:16.792374 | orchestrator | 2025-05-31 16:41:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:16.792474 | orchestrator | 2025-05-31 16:41:16 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:16.792489 | orchestrator | 2025-05-31 16:41:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:19.833070 | orchestrator | 2025-05-31 16:41:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:19.833973 | orchestrator | 2025-05-31 16:41:19 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:19.834268 | orchestrator | 2025-05-31 16:41:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:22.877150 | orchestrator | 2025-05-31 16:41:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:22.877484 | orchestrator | 2025-05-31 16:41:22 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:22.877516 | orchestrator | 2025-05-31 16:41:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:25.924610 | orchestrator | 2025-05-31 16:41:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:25.926014 | orchestrator | 2025-05-31 16:41:25 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:25.926247 | orchestrator | 2025-05-31 16:41:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:28.979808 | orchestrator | 2025-05-31 16:41:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:28.980843 | orchestrator | 2025-05-31 16:41:28 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:28.980887 | orchestrator | 2025-05-31 16:41:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:32.035604 | orchestrator | 2025-05-31 16:41:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:32.037478 | orchestrator | 2025-05-31 16:41:32 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:32.037520 | orchestrator | 2025-05-31 16:41:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:35.086839 | orchestrator | 2025-05-31 16:41:35 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:35.088206 | orchestrator | 2025-05-31 16:41:35 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:35.088325 | orchestrator | 2025-05-31 16:41:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:38.138462 | orchestrator | 2025-05-31 16:41:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:38.139928 | orchestrator | 2025-05-31 16:41:38 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:38.139989 | orchestrator | 2025-05-31 16:41:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:41.200445 | orchestrator | 2025-05-31 16:41:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:41.200551 | orchestrator | 2025-05-31 16:41:41 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:41.200566 | orchestrator | 2025-05-31 16:41:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:44.248410 | orchestrator | 2025-05-31 16:41:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:44.249774 | orchestrator | 2025-05-31 16:41:44 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:44.249807 | orchestrator | 2025-05-31 16:41:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:47.290148 | orchestrator | 2025-05-31 16:41:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:47.290692 | orchestrator | 2025-05-31 16:41:47 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:47.290724 | orchestrator | 2025-05-31 16:41:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:50.336785 | orchestrator | 2025-05-31 16:41:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:50.337106 | orchestrator | 2025-05-31 16:41:50 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:50.337134 | orchestrator | 2025-05-31 16:41:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:53.384749 | orchestrator | 2025-05-31 16:41:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:53.386010 | orchestrator | 2025-05-31 16:41:53 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:53.386171 | orchestrator | 2025-05-31 16:41:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:56.432715 | orchestrator | 2025-05-31 16:41:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:56.436748 | orchestrator | 2025-05-31 16:41:56 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:56.436794 | orchestrator | 2025-05-31 16:41:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:41:59.482721 | orchestrator | 2025-05-31 16:41:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:41:59.483306 | orchestrator | 2025-05-31 16:41:59 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:41:59.483404 | orchestrator | 2025-05-31 16:41:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:02.532814 | orchestrator | 2025-05-31 16:42:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:02.534201 | orchestrator 
| 2025-05-31 16:42:02 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:02.534248 | orchestrator | 2025-05-31 16:42:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:05.579396 | orchestrator | 2025-05-31 16:42:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:05.579504 | orchestrator | 2025-05-31 16:42:05 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:05.579518 | orchestrator | 2025-05-31 16:42:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:08.626990 | orchestrator | 2025-05-31 16:42:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:08.628120 | orchestrator | 2025-05-31 16:42:08 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:08.628150 | orchestrator | 2025-05-31 16:42:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:11.674196 | orchestrator | 2025-05-31 16:42:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:11.675549 | orchestrator | 2025-05-31 16:42:11 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:11.675592 | orchestrator | 2025-05-31 16:42:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:14.724491 | orchestrator | 2025-05-31 16:42:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:14.725850 | orchestrator | 2025-05-31 16:42:14 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:14.725901 | orchestrator | 2025-05-31 16:42:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:17.778000 | orchestrator | 2025-05-31 16:42:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:17.778535 | orchestrator | 2025-05-31 16:42:17 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:17.778590 | orchestrator | 2025-05-31 16:42:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:20.821541 | orchestrator | 2025-05-31 16:42:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:20.823382 | orchestrator | 2025-05-31 16:42:20 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:20.823421 | orchestrator | 2025-05-31 16:42:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:23.872766 | orchestrator | 2025-05-31 16:42:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:23.876753 | orchestrator | 2025-05-31 16:42:23 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:23.876812 | orchestrator | 2025-05-31 16:42:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:26.917092 | orchestrator | 2025-05-31 16:42:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:26.917814 | orchestrator | 2025-05-31 16:42:26 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:26.917848 | orchestrator | 2025-05-31 16:42:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:29.951360 | orchestrator | 2025-05-31 16:42:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:29.952033 | orchestrator | 2025-05-31 16:42:29 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 
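
Editor's note: once these tasks finish, the log below replays the wrapped playbooks; the longest step there is "Waiting for Nova public port to be UP" (about 178 seconds in the recap), which boils down to probing a TCP port until it accepts connections. A minimal sketch of such a probe, with illustrative host, port and timeout values:

    import socket
    import time

    def wait_for_port(host, port, timeout=300, interval=5):
        """Probe a TCP port until it accepts a connection or the timeout expires.

        host, port, timeout and interval are illustrative assumptions; the
        playbook output further down waits on the Nova public API port in
        much the same way.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=5):
                    return True
            except OSError:
                time.sleep(interval)
        return False

    # Example (hypothetical endpoint):
    # wait_for_port("api.testbed.osism.xyz", 8774)
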
2025-05-31 16:42:29.952070 | orchestrator | 2025-05-31 16:42:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:32.984895 | orchestrator | 2025-05-31 16:42:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:32.986270 | orchestrator | 2025-05-31 16:42:32 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:32.986314 | orchestrator | 2025-05-31 16:42:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:36.020597 | orchestrator | 2025-05-31 16:42:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:36.020729 | orchestrator | 2025-05-31 16:42:36 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:36.020745 | orchestrator | 2025-05-31 16:42:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:39.070330 | orchestrator | 2025-05-31 16:42:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:39.071874 | orchestrator | 2025-05-31 16:42:39 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:39.071950 | orchestrator | 2025-05-31 16:42:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:42.117083 | orchestrator | 2025-05-31 16:42:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:42.117961 | orchestrator | 2025-05-31 16:42:42 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:42.117992 | orchestrator | 2025-05-31 16:42:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:45.169318 | orchestrator | 2025-05-31 16:42:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:45.170902 | orchestrator | 2025-05-31 16:42:45 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:45.170934 | orchestrator | 2025-05-31 16:42:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:48.213968 | orchestrator | 2025-05-31 16:42:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:48.214506 | orchestrator | 2025-05-31 16:42:48 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:48.214795 | orchestrator | 2025-05-31 16:42:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:51.269144 | orchestrator | 2025-05-31 16:42:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:51.270232 | orchestrator | 2025-05-31 16:42:51 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:51.270406 | orchestrator | 2025-05-31 16:42:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:54.315777 | orchestrator | 2025-05-31 16:42:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:54.317140 | orchestrator | 2025-05-31 16:42:54 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:54.318387 | orchestrator | 2025-05-31 16:42:54 | INFO  | Task 69629780-4808-49b8-a771-112a8cf42644 is in state STARTED 2025-05-31 16:42:54.318469 | orchestrator | 2025-05-31 16:42:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:42:57.370787 | orchestrator | 2025-05-31 16:42:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:42:57.374136 | orchestrator | 2025-05-31 16:42:57 | INFO  | Task 
c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:42:57.375307 | orchestrator | 2025-05-31 16:42:57 | INFO  | Task 69629780-4808-49b8-a771-112a8cf42644 is in state STARTED 2025-05-31 16:42:57.375553 | orchestrator | 2025-05-31 16:42:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:43:00.432919 | orchestrator | 2025-05-31 16:43:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:43:00.433323 | orchestrator | 2025-05-31 16:43:00 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:43:00.434637 | orchestrator | 2025-05-31 16:43:00 | INFO  | Task 69629780-4808-49b8-a771-112a8cf42644 is in state STARTED 2025-05-31 16:43:00.434762 | orchestrator | 2025-05-31 16:43:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:43:03.494921 | orchestrator | 2025-05-31 16:43:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:43:03.495144 | orchestrator | 2025-05-31 16:43:03 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:43:03.496835 | orchestrator | 2025-05-31 16:43:03 | INFO  | Task 69629780-4808-49b8-a771-112a8cf42644 is in state STARTED 2025-05-31 16:43:03.497246 | orchestrator | 2025-05-31 16:43:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:43:06.546912 | orchestrator | 2025-05-31 16:43:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:43:06.548153 | orchestrator | 2025-05-31 16:43:06 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state STARTED 2025-05-31 16:43:06.549652 | orchestrator | 2025-05-31 16:43:06 | INFO  | Task 69629780-4808-49b8-a771-112a8cf42644 is in state SUCCESS 2025-05-31 16:43:06.549983 | orchestrator | 2025-05-31 16:43:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:43:09.599328 | orchestrator | 2025-05-31 16:43:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:43:09.603579 | orchestrator | 2025-05-31 16:43:09 | INFO  | Task c1e32895-cbdf-4314-96b1-9bb6ced44664 is in state SUCCESS 2025-05-31 16:43:09.605985 | orchestrator | 2025-05-31 16:43:09.606154 | orchestrator | 2025-05-31 16:43:09.606174 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:43:09.606228 | orchestrator | 2025-05-31 16:43:09.606269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:43:09.606281 | orchestrator | Saturday 31 May 2025 16:36:00 +0000 (0:00:00.145) 0:00:00.145 ********** 2025-05-31 16:43:09.606372 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:43:09.606387 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:43:09.606398 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:43:09.606409 | orchestrator | 2025-05-31 16:43:09.606420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:43:09.606430 | orchestrator | Saturday 31 May 2025 16:36:01 +0000 (0:00:00.281) 0:00:00.426 ********** 2025-05-31 16:43:09.606441 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-31 16:43:09.606452 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-31 16:43:09.606463 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-31 16:43:09.606474 | orchestrator | 2025-05-31 16:43:09.606484 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2025-05-31 16:43:09.606495 | orchestrator | 2025-05-31 16:43:09.606506 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-31 16:43:09.606517 | orchestrator | Saturday 31 May 2025 16:36:01 +0000 (0:00:00.412) 0:00:00.839 ********** 2025-05-31 16:43:09.606527 | orchestrator | 2025-05-31 16:43:09.606538 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-31 16:43:09.606548 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:43:09.606559 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:43:09.606587 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:43:09.606609 | orchestrator | 2025-05-31 16:43:09.606656 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:43:09.606670 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:43:09.606684 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:43:09.606711 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:43:09.606724 | orchestrator | 2025-05-31 16:43:09.606736 | orchestrator | 2025-05-31 16:43:09.606749 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:43:09.606783 | orchestrator | Saturday 31 May 2025 16:38:59 +0000 (0:02:57.826) 0:02:58.665 ********** 2025-05-31 16:43:09.606794 | orchestrator | =============================================================================== 2025-05-31 16:43:09.606805 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 177.83s 2025-05-31 16:43:09.606816 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-05-31 16:43:09.606826 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-05-31 16:43:09.606837 | orchestrator | 2025-05-31 16:43:09.606849 | orchestrator | None 2025-05-31 16:43:09.606860 | orchestrator | 2025-05-31 16:43:09.606871 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 16:43:09.606881 | orchestrator | 2025-05-31 16:43:09.606892 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-31 16:43:09.606903 | orchestrator | Saturday 31 May 2025 16:34:51 +0000 (0:00:00.194) 0:00:00.194 ********** 2025-05-31 16:43:09.606913 | orchestrator | changed: [testbed-manager] 2025-05-31 16:43:09.606925 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.606936 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.606946 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.606957 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.606968 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.606978 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.606989 | orchestrator | 2025-05-31 16:43:09.606999 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 16:43:09.607010 | orchestrator | Saturday 31 May 2025 16:34:52 +0000 (0:00:00.935) 0:00:01.129 ********** 2025-05-31 16:43:09.607031 | orchestrator | changed: [testbed-manager] 2025-05-31 16:43:09.607041 | orchestrator | changed: [testbed-node-0] 
2025-05-31 16:43:09.607052 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.607063 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.607073 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.607084 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.607095 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.607105 | orchestrator | 2025-05-31 16:43:09.607116 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 16:43:09.607127 | orchestrator | Saturday 31 May 2025 16:34:54 +0000 (0:00:01.790) 0:00:02.920 ********** 2025-05-31 16:43:09.607137 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-31 16:43:09.607148 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-31 16:43:09.607159 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-31 16:43:09.607169 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-31 16:43:09.607180 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-31 16:43:09.607191 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-31 16:43:09.607201 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-31 16:43:09.607212 | orchestrator | 2025-05-31 16:43:09.607223 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-31 16:43:09.607233 | orchestrator | 2025-05-31 16:43:09.607244 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-31 16:43:09.607254 | orchestrator | Saturday 31 May 2025 16:34:56 +0000 (0:00:02.329) 0:00:05.253 ********** 2025-05-31 16:43:09.607265 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:43:09.607276 | orchestrator | 2025-05-31 16:43:09.607286 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-31 16:43:09.607313 | orchestrator | Saturday 31 May 2025 16:34:59 +0000 (0:00:02.271) 0:00:07.524 ********** 2025-05-31 16:43:09.607324 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-31 16:43:09.607336 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-31 16:43:09.607347 | orchestrator | 2025-05-31 16:43:09.607357 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-31 16:43:09.607368 | orchestrator | Saturday 31 May 2025 16:35:04 +0000 (0:00:05.019) 0:00:12.543 ********** 2025-05-31 16:43:09.607379 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:43:09.607390 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 16:43:09.607400 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.607411 | orchestrator | 2025-05-31 16:43:09.607422 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-31 16:43:09.607433 | orchestrator | Saturday 31 May 2025 16:35:09 +0000 (0:00:04.980) 0:00:17.524 ********** 2025-05-31 16:43:09.607443 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.607454 | orchestrator | 2025-05-31 16:43:09.607465 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-31 16:43:09.607480 | orchestrator | Saturday 31 May 2025 16:35:09 +0000 (0:00:00.711) 0:00:18.235 ********** 2025-05-31 16:43:09.607500 | 
orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.607671 | orchestrator |
2025-05-31 16:43:09.607711 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-31 16:43:09.607725 | orchestrator | Saturday 31 May 2025 16:35:11 +0000 (0:00:01.866) 0:00:20.102 **********
2025-05-31 16:43:09.607736 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.607747 | orchestrator |
2025-05-31 16:43:09.607758 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-31 16:43:09.607768 | orchestrator | Saturday 31 May 2025 16:35:16 +0000 (0:00:04.482) 0:00:24.584 **********
2025-05-31 16:43:09.607779 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:43:09.607799 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.607810 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.607821 | orchestrator |
2025-05-31 16:43:09.607832 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-31 16:43:09.607852 | orchestrator | Saturday 31 May 2025 16:35:16 +0000 (0:00:00.540) 0:00:25.125 **********
2025-05-31 16:43:09.607869 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:43:09.607885 | orchestrator |
2025-05-31 16:43:09.607911 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-31 16:43:09.607930 | orchestrator | Saturday 31 May 2025 16:35:48 +0000 (0:00:31.891) 0:00:57.016 **********
2025-05-31 16:43:09.607949 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.607961 | orchestrator |
2025-05-31 16:43:09.607972 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-31 16:43:09.607982 | orchestrator | Saturday 31 May 2025 16:36:02 +0000 (0:00:14.014) 0:01:11.031 **********
2025-05-31 16:43:09.607993 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:43:09.608003 | orchestrator |
2025-05-31 16:43:09.608014 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-31 16:43:09.608024 | orchestrator | Saturday 31 May 2025 16:36:13 +0000 (0:00:11.058) 0:01:22.090 **********
2025-05-31 16:43:09.608035 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:43:09.608046 | orchestrator |
2025-05-31 16:43:09.608056 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-31 16:43:09.608067 | orchestrator | Saturday 31 May 2025 16:36:15 +0000 (0:00:01.339) 0:01:23.429 **********
2025-05-31 16:43:09.608078 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:43:09.608088 | orchestrator |
2025-05-31 16:43:09.608099 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-31 16:43:09.608109 | orchestrator | Saturday 31 May 2025 16:36:15 +0000 (0:00:00.742) 0:01:24.172 **********
2025-05-31 16:43:09.608120 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 16:43:09.608130 | orchestrator |
2025-05-31 16:43:09.608141 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-31 16:43:09.608154 | orchestrator | Saturday 31 May 2025 16:36:16 +0000 (0:00:00.783) 0:01:24.955 **********
2025-05-31 16:43:09.608173 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:43:09.608193 | orchestrator |
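The bootstrap tasks above (running the Nova API bootstrap container, creating the cell0 mapping, and listing the existing cells) correspond to Nova's standard nova-manage bootstrap sequence. A minimal sketch of roughly equivalent commands, written as Ansible tasks for orientation only (kolla-ansible runs these inside the nova_api bootstrap container, and the exact invocation may differ):

- name: Sync the Nova API database schema
  ansible.builtin.command: nova-manage api_db sync

- name: Map cell0 to the nova_cell0 database
  ansible.builtin.command: nova-manage cell_v2 map_cell0

- name: List existing cells, as in the "Get a list of existing cells" task above
  ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
  register: existing_cells
  changed_when: false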
2025-05-31 16:43:09.608212 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-31 16:43:09.608231 | orchestrator | Saturday 31 May 2025 16:36:33 +0000 (0:00:16.960) 0:01:41.916 **********
2025-05-31 16:43:09.608245 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:43:09.608256 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608267 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608277 | orchestrator |
2025-05-31 16:43:09.608288 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-31 16:43:09.608298 | orchestrator |
2025-05-31 16:43:09.608309 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-31 16:43:09.608320 | orchestrator | Saturday 31 May 2025 16:36:33 +0000 (0:00:00.275) 0:01:42.191 **********
2025-05-31 16:43:09.608330 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 16:43:09.608341 | orchestrator |
2025-05-31 16:43:09.608351 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-31 16:43:09.608362 | orchestrator | Saturday 31 May 2025 16:36:34 +0000 (0:00:00.704) 0:01:42.896 **********
2025-05-31 16:43:09.608372 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608383 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608394 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.608404 | orchestrator |
2025-05-31 16:43:09.608415 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-31 16:43:09.608425 | orchestrator | Saturday 31 May 2025 16:36:36 +0000 (0:00:02.310) 0:01:45.206 **********
2025-05-31 16:43:09.608436 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608459 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608478 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.608494 | orchestrator |
2025-05-31 16:43:09.608506 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-31 16:43:09.608532 | orchestrator | Saturday 31 May 2025 16:36:39 +0000 (0:00:02.271) 0:01:47.477 **********
2025-05-31 16:43:09.608543 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:43:09.608554 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608565 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608575 | orchestrator |
2025-05-31 16:43:09.608586 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-31 16:43:09.608597 | orchestrator | Saturday 31 May 2025 16:36:39 +0000 (0:00:00.363) 0:01:47.841 **********
2025-05-31 16:43:09.608607 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-31 16:43:09.608701 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608715 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-31 16:43:09.608726 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608736 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-31 16:43:09.608747 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-31 16:43:09.608758 | orchestrator |
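The two service-rabbitmq tasks above ensure that the RabbitMQ vhost and user Nova will use exist, delegated to {{ service_rabbitmq_delegate_host }}. A minimal sketch of the same idea using the community.rabbitmq modules (module choice, names, and the password variable are illustrative assumptions; kolla-ansible's service-rabbitmq role has its own implementation):

- name: nova | Ensure RabbitMQ vhost exists
  community.rabbitmq.rabbitmq_vhost:
    name: /            # placeholder vhost
    state: present

- name: nova | Ensure RabbitMQ user exists
  community.rabbitmq.rabbitmq_user:
    user: openstack                           # placeholder user name
    password: "{{ nova_rabbitmq_password }}"  # placeholder variable
    vhost: /
    configure_priv: .*
    read_priv: .*
    write_priv: .*
    state: present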
2025-05-31 16:43:09.608769 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-31 16:43:09.608780 | orchestrator | Saturday 31 May 2025 16:36:48 +0000 (0:00:08.880) 0:01:56.721 **********
2025-05-31 16:43:09.608790 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:43:09.608801 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608811 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608822 | orchestrator |
2025-05-31 16:43:09.608833 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-31 16:43:09.608843 | orchestrator | Saturday 31 May 2025 16:36:48 +0000 (0:00:00.515) 0:01:57.236 **********
2025-05-31 16:43:09.608854 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-31 16:43:09.608865 | orchestrator | skipping: [testbed-node-0]
2025-05-31 16:43:09.608875 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-31 16:43:09.608886 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608897 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-31 16:43:09.608907 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608918 | orchestrator |
2025-05-31 16:43:09.608928 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-05-31 16:43:09.608939 | orchestrator | Saturday 31 May 2025 16:36:49 +0000 (0:00:00.944) 0:01:58.180 **********
2025-05-31 16:43:09.608949 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.608960 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.608977 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.608988 | orchestrator |
2025-05-31 16:43:09.608998 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-05-31 16:43:09.609009 | orchestrator | Saturday 31 May 2025 16:36:50 +0000 (0:00:00.539) 0:01:58.719 **********
2025-05-31 16:43:09.609020 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.609030 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.609041 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.609052 | orchestrator |
2025-05-31 16:43:09.609062 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-05-31 16:43:09.609072 | orchestrator | Saturday 31 May 2025 16:36:51 +0000 (0:00:00.971) 0:01:59.691 **********
2025-05-31 16:43:09.609081 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.609091 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.609100 | orchestrator | changed: [testbed-node-0]
2025-05-31 16:43:09.609110 | orchestrator |
2025-05-31 16:43:09.609119 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-05-31 16:43:09.609129 | orchestrator | Saturday 31 May 2025 16:36:53 +0000 (0:00:02.314) 0:02:02.005 **********
2025-05-31 16:43:09.609146 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.609156 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.609166 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:43:09.609175 | orchestrator |
2025-05-31 16:43:09.609185 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-31 16:43:09.609194 | orchestrator | Saturday 31 May 2025 16:37:13 +0000 (0:00:19.622) 0:02:21.628 **********
2025-05-31 16:43:09.609204 | orchestrator | skipping: [testbed-node-1]
2025-05-31 16:43:09.609213 | orchestrator | skipping: [testbed-node-2]
2025-05-31 16:43:09.609222 | orchestrator | ok: [testbed-node-0]
2025-05-31 16:43:09.609232 | orchestrator |
2025-05-31 16:43:09.609241 | orchestrator | TASK [nova-cell :
Extract current cell settings from list] ********************* 2025-05-31 16:43:09.609251 | orchestrator | Saturday 31 May 2025 16:37:24 +0000 (0:00:11.289) 0:02:32.917 ********** 2025-05-31 16:43:09.609260 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:43:09.609270 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.609279 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.609288 | orchestrator | 2025-05-31 16:43:09.609298 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-31 16:43:09.609307 | orchestrator | Saturday 31 May 2025 16:37:25 +0000 (0:00:01.089) 0:02:34.006 ********** 2025-05-31 16:43:09.609317 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.609326 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.609335 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.609345 | orchestrator | 2025-05-31 16:43:09.609354 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-31 16:43:09.609364 | orchestrator | Saturday 31 May 2025 16:37:36 +0000 (0:00:11.157) 0:02:45.163 ********** 2025-05-31 16:43:09.609373 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.609382 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.609392 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.609401 | orchestrator | 2025-05-31 16:43:09.609411 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-31 16:43:09.609420 | orchestrator | Saturday 31 May 2025 16:37:38 +0000 (0:00:01.402) 0:02:46.566 ********** 2025-05-31 16:43:09.609429 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.609439 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.609448 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.609458 | orchestrator | 2025-05-31 16:43:09.609467 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-31 16:43:09.609477 | orchestrator | 2025-05-31 16:43:09.609486 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-31 16:43:09.609496 | orchestrator | Saturday 31 May 2025 16:37:38 +0000 (0:00:00.443) 0:02:47.009 ********** 2025-05-31 16:43:09.609512 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:43:09.609522 | orchestrator | 2025-05-31 16:43:09.609532 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-31 16:43:09.609541 | orchestrator | Saturday 31 May 2025 16:37:39 +0000 (0:00:00.607) 0:02:47.617 ********** 2025-05-31 16:43:09.609551 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-31 16:43:09.609560 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-31 16:43:09.609570 | orchestrator | 2025-05-31 16:43:09.609580 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-31 16:43:09.609589 | orchestrator | Saturday 31 May 2025 16:37:42 +0000 (0:00:03.593) 0:02:51.210 ********** 2025-05-31 16:43:09.609598 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-31 16:43:09.609608 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> 
https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-31 16:43:09.609637 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-31 16:43:09.609654 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-31 16:43:09.609663 | orchestrator | 2025-05-31 16:43:09.609673 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-31 16:43:09.609682 | orchestrator | Saturday 31 May 2025 16:37:49 +0000 (0:00:06.626) 0:02:57.836 ********** 2025-05-31 16:43:09.609692 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 16:43:09.609701 | orchestrator | 2025-05-31 16:43:09.609710 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-31 16:43:09.609720 | orchestrator | Saturday 31 May 2025 16:37:52 +0000 (0:00:03.279) 0:03:01.116 ********** 2025-05-31 16:43:09.609729 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 16:43:09.609738 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-31 16:43:09.609748 | orchestrator | 2025-05-31 16:43:09.609757 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-31 16:43:09.609771 | orchestrator | Saturday 31 May 2025 16:37:56 +0000 (0:00:03.962) 0:03:05.078 ********** 2025-05-31 16:43:09.609781 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 16:43:09.609790 | orchestrator | 2025-05-31 16:43:09.609799 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-31 16:43:09.609809 | orchestrator | Saturday 31 May 2025 16:38:00 +0000 (0:00:03.825) 0:03:08.903 ********** 2025-05-31 16:43:09.609818 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-31 16:43:09.609827 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-31 16:43:09.609837 | orchestrator | 2025-05-31 16:43:09.609846 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-31 16:43:09.609855 | orchestrator | Saturday 31 May 2025 16:38:08 +0000 (0:00:08.422) 0:03:17.326 ********** 2025-05-31 16:43:09.609871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.609894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.609917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.609929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.609940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.609951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.609967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.609984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.609994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.610004 | orchestrator | 2025-05-31 16:43:09.610014 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-31 16:43:09.610088 | orchestrator | Saturday 31 May 2025 16:38:10 +0000 (0:00:01.424) 0:03:18.751 ********** 2025-05-31 16:43:09.610099 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.610109 | orchestrator | 2025-05-31 16:43:09.610123 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-31 16:43:09.610133 | 
orchestrator | Saturday 31 May 2025 16:38:10 +0000 (0:00:00.240) 0:03:18.991 ********** 2025-05-31 16:43:09.610142 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.610152 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.610161 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.610171 | orchestrator | 2025-05-31 16:43:09.610180 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-31 16:43:09.610190 | orchestrator | Saturday 31 May 2025 16:38:10 +0000 (0:00:00.263) 0:03:19.255 ********** 2025-05-31 16:43:09.610199 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 16:43:09.610208 | orchestrator | 2025-05-31 16:43:09.610218 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-31 16:43:09.610227 | orchestrator | Saturday 31 May 2025 16:38:11 +0000 (0:00:00.517) 0:03:19.773 ********** 2025-05-31 16:43:09.610236 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.610246 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.610255 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.610265 | orchestrator | 2025-05-31 16:43:09.610274 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-31 16:43:09.610284 | orchestrator | Saturday 31 May 2025 16:38:11 +0000 (0:00:00.272) 0:03:20.045 ********** 2025-05-31 16:43:09.610293 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:43:09.610302 | orchestrator | 2025-05-31 16:43:09.610311 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-31 16:43:09.610321 | orchestrator | Saturday 31 May 2025 16:38:12 +0000 (0:00:00.749) 0:03:20.795 ********** 2025-05-31 16:43:09.610335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.610377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.610404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.610417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.610427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.610454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.610464 | orchestrator | 2025-05-31 16:43:09.610474 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-31 16:43:09.610483 | orchestrator | Saturday 31 May 2025 16:38:14 +0000 (0:00:02.540) 0:03:23.335 ********** 2025-05-31 16:43:09.610493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.610509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.610519 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.610529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.610546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.610556 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.611004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.611129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611160 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.611182 | orchestrator | 2025-05-31 16:43:09.611203 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-31 16:43:09.611223 | orchestrator | Saturday 31 May 2025 16:38:15 +0000 (0:00:00.630) 0:03:23.966 ********** 2025-05-31 16:43:09.611243 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.611289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611303 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.611336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.611350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611369 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.611381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.611401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611412 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.611423 | orchestrator | 2025-05-31 16:43:09.611434 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-31 16:43:09.611445 | orchestrator | Saturday 31 May 2025 16:38:16 +0000 (0:00:01.104) 0:03:25.070 ********** 2025-05-31 16:43:09.611467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.611485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.611499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.611519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.611541 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.611588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.611671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611691 | orchestrator | 2025-05-31 16:43:09.611710 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-31 16:43:09.611729 | orchestrator | Saturday 31 May 2025 16:38:19 +0000 (0:00:02.965) 0:03:28.036 ********** 2025-05-31 16:43:09.611764 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.611785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.611806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.611828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.611842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.611878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.611907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.611925 | orchestrator | 2025-05-31 16:43:09.611936 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-31 16:43:09.611947 | orchestrator | Saturday 31 May 2025 16:38:26 +0000 (0:00:06.763) 0:03:34.799 ********** 2025-05-31 16:43:09.611959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.611989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612012 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.612029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.612047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612070 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.612089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 16:43:09.612102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612142 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.612153 | orchestrator | 2025-05-31 16:43:09.612164 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-31 16:43:09.612175 | orchestrator | Saturday 31 May 2025 16:38:27 +0000 (0:00:00.786) 0:03:35.585 ********** 2025-05-31 16:43:09.612186 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.612197 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.612208 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.612219 | orchestrator | 2025-05-31 16:43:09.612230 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-31 16:43:09.612241 | orchestrator | Saturday 31 May 2025 16:38:28 +0000 (0:00:01.672) 0:03:37.258 ********** 2025-05-31 16:43:09.612252 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.612263 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.612274 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.612285 | orchestrator | 2025-05-31 16:43:09.612299 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-31 16:43:09.612318 | orchestrator | Saturday 31 May 2025 16:38:29 +0000 (0:00:00.449) 0:03:37.707 ********** 2025-05-31 16:43:09.612330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.612350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.612369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 16:43:09.612388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.612400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.612430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.612459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.612470 | orchestrator | 2025-05-31 16:43:09.612486 | orchestrator | TASK [nova : 
Flush handlers] *************************************************** 2025-05-31 16:43:09.612497 | orchestrator | Saturday 31 May 2025 16:38:31 +0000 (0:00:02.026) 0:03:39.734 ********** 2025-05-31 16:43:09.612508 | orchestrator | 2025-05-31 16:43:09.612519 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-31 16:43:09.612530 | orchestrator | Saturday 31 May 2025 16:38:31 +0000 (0:00:00.240) 0:03:39.974 ********** 2025-05-31 16:43:09.612540 | orchestrator | 2025-05-31 16:43:09.612551 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-31 16:43:09.612562 | orchestrator | Saturday 31 May 2025 16:38:31 +0000 (0:00:00.103) 0:03:40.078 ********** 2025-05-31 16:43:09.612573 | orchestrator | 2025-05-31 16:43:09.612584 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-31 16:43:09.612594 | orchestrator | Saturday 31 May 2025 16:38:31 +0000 (0:00:00.233) 0:03:40.312 ********** 2025-05-31 16:43:09.612605 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.612655 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.612669 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.612680 | orchestrator | 2025-05-31 16:43:09.612691 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-31 16:43:09.612702 | orchestrator | Saturday 31 May 2025 16:38:50 +0000 (0:00:18.643) 0:03:58.956 ********** 2025-05-31 16:43:09.612713 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.612723 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.612734 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.612745 | orchestrator | 2025-05-31 16:43:09.612756 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-31 16:43:09.612767 | orchestrator | 2025-05-31 16:43:09.612777 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-31 16:43:09.612788 | orchestrator | Saturday 31 May 2025 16:38:59 +0000 (0:00:08.744) 0:04:07.700 ********** 2025-05-31 16:43:09.612800 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 16:43:09.612811 | orchestrator | 2025-05-31 16:43:09.612822 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-31 16:43:09.612833 | orchestrator | Saturday 31 May 2025 16:39:01 +0000 (0:00:01.649) 0:04:09.349 ********** 2025-05-31 16:43:09.612844 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.612855 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.612873 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.612893 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.612913 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.612924 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.612935 | orchestrator | 2025-05-31 16:43:09.612946 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-31 16:43:09.612956 | orchestrator | Saturday 31 May 2025 16:39:01 +0000 (0:00:00.708) 0:04:10.057 ********** 2025-05-31 16:43:09.612967 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.612986 | orchestrator | skipping: [testbed-node-1] 2025-05-31 
16:43:09.612997 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.613007 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:43:09.613018 | orchestrator | 2025-05-31 16:43:09.613029 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-31 16:43:09.613040 | orchestrator | Saturday 31 May 2025 16:39:02 +0000 (0:00:00.999) 0:04:11.057 ********** 2025-05-31 16:43:09.613051 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-31 16:43:09.613062 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-31 16:43:09.613073 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-31 16:43:09.613084 | orchestrator | 2025-05-31 16:43:09.613102 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-31 16:43:09.613114 | orchestrator | Saturday 31 May 2025 16:39:03 +0000 (0:00:00.873) 0:04:11.931 ********** 2025-05-31 16:43:09.613124 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-31 16:43:09.613135 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-31 16:43:09.613146 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-31 16:43:09.613157 | orchestrator | 2025-05-31 16:43:09.613167 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-31 16:43:09.613178 | orchestrator | Saturday 31 May 2025 16:39:05 +0000 (0:00:01.486) 0:04:13.418 ********** 2025-05-31 16:43:09.613189 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-31 16:43:09.613200 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.613210 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-31 16:43:09.613221 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.613231 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-31 16:43:09.613242 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.613253 | orchestrator | 2025-05-31 16:43:09.613263 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-31 16:43:09.613274 | orchestrator | Saturday 31 May 2025 16:39:05 +0000 (0:00:00.588) 0:04:14.007 ********** 2025-05-31 16:43:09.613285 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-31 16:43:09.613296 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-31 16:43:09.613306 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.613317 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-31 16:43:09.613328 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-31 16:43:09.613338 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-31 16:43:09.613349 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-31 16:43:09.613360 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-31 16:43:09.613376 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.613387 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-31 16:43:09.613397 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
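As the surrounding task output shows, the module-load role loads br_netfilter immediately and persists it (per the task name, via a modules-load.d drop-in, typically under /etc/modules-load.d/) on the compute nodes testbed-node-3/4/5, and the nova-cell role then enables the net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables sysctls on those same nodes while skipping the controllers. The following is a minimal, illustrative host-side check of that end state; it is not part of the kolla-ansible or OSISM roles run in this job, and the exact drop-in file name is an assumption since the log does not show it.

    # Illustrative verification sketch only -- not taken from the roles in this log.
    from pathlib import Path

    def module_loaded(name: str) -> bool:
        # /proc/modules lists every currently loaded kernel module, one per line.
        return any(line.split()[0] == name
                   for line in Path("/proc/modules").read_text().splitlines()
                   if line.strip())

    def sysctl_value(key: str) -> str:
        # sysctl keys map onto /proc/sys paths with dots replaced by slashes.
        return Path("/proc/sys", key.replace(".", "/")).read_text().strip()

    if __name__ == "__main__":
        print("br_netfilter loaded:", module_loaded("br_netfilter"))
        # Assumes the standard systemd drop-in directory; the file name the role
        # writes is not visible in this log.
        persisted = any("br_netfilter" in p.read_text()
                        for p in Path("/etc/modules-load.d").glob("*.conf"))
        print("br_netfilter persisted via modules-load.d:", persisted)
        for key in ("net.bridge.bridge-nf-call-iptables",
                    "net.bridge.bridge-nf-call-ip6tables"):
            # On the compute nodes (testbed-node-3/4/5) these should read "1"
            # after the task above; the controllers are skipped.
            print(key, "=", sysctl_value(key))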
2025-05-31 16:43:09.613408 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.613419 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-31 16:43:09.613429 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-31 16:43:09.613440 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-31 16:43:09.613450 | orchestrator | 2025-05-31 16:43:09.613461 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-31 16:43:09.613471 | orchestrator | Saturday 31 May 2025 16:39:07 +0000 (0:00:02.298) 0:04:16.305 ********** 2025-05-31 16:43:09.613488 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.613499 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.613510 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.613520 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.613531 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.613541 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.613552 | orchestrator | 2025-05-31 16:43:09.613563 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-31 16:43:09.613574 | orchestrator | Saturday 31 May 2025 16:39:09 +0000 (0:00:01.225) 0:04:17.531 ********** 2025-05-31 16:43:09.613584 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.613595 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.613606 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.613664 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.613677 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.613688 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.613699 | orchestrator | 2025-05-31 16:43:09.613710 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-31 16:43:09.613721 | orchestrator | Saturday 31 May 2025 16:39:11 +0000 (0:00:01.820) 0:04:19.352 ********** 2025-05-31 16:43:09.613733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.613753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.613766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.613783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.613803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.613815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.613833 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.613846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.613859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.613876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.613894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.613906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.613917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.613936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.613948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.613959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.613982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.613994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.614074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.614086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.614109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.614126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.614139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.614162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.614181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.614241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.614252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.614426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.614442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614524 | orchestrator | 2025-05-31 16:43:09.614535 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-31 16:43:09.614551 | orchestrator | Saturday 31 May 2025 16:39:13 +0000 (0:00:02.668) 0:04:22.020 ********** 2025-05-31 16:43:09.614563 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 
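Note: the loop items above serialize each Nova service definition as a flattened Python dict, which is hard to scan. Re-rendered as YAML purely for readability, the nova-compute entry logged above corresponds roughly to the following (a transcription of the logged values, not the upstream kolla-ansible template; the empty strings in the logged volume list are optional mounts left unset in this deployment):

nova-compute:
  container_name: nova_compute
  group: compute
  enabled: true
  image: registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206
  privileged: true
  ipc_mode: host
  environment:
    LIBGUESTFS_BACKEND: direct
  volumes:
    - /etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro   # config generated on the host by kolla-ansible
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - /lib/modules:/lib/modules:ro
    - /run:/run:shared
    - /dev:/dev
    - kolla_logs:/var/log/kolla/
    - iscsi_info:/etc/iscsi
    - libvirtd:/var/lib/libvirt
    - nova_compute:/var/lib/nova/
  dimensions: {}
  healthcheck:
    interval: '30'
    retries: '3'
    start_period: '5'
    timeout: '30'
    test: ['CMD-SHELL', 'healthcheck_port nova-compute 5672']   # 5672 is the AMQP (RabbitMQ) port the check probes

The same structure (container_name, image, volumes, dimensions, healthcheck) repeats for every service in the loop; only the per-service values differ.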
2025-05-31 16:43:09.614574 | orchestrator | 2025-05-31 16:43:09.614585 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-31 16:43:09.614596 | orchestrator | Saturday 31 May 2025 16:39:15 +0000 (0:00:01.467) 0:04:23.488 ********** 2025-05-31 16:43:09.614608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.614867 | orchestrator | 2025-05-31 16:43:09.614878 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-31 16:43:09.614889 | orchestrator | Saturday 31 May 2025 16:39:19 +0000 (0:00:04.064) 0:04:27.552 ********** 2025-05-31 16:43:09.614901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.614920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.614932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.614944 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.614956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.614980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.614992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615004 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.615021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.615033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.615044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615055 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.615067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.615092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615104 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.615115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.615132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615143 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.615154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.615166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615177 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.615188 | orchestrator | 2025-05-31 16:43:09.615199 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-31 16:43:09.615210 | orchestrator | Saturday 31 May 2025 16:39:20 +0000 (0:00:01.608) 0:04:29.161 ********** 2025-05-31 16:43:09.615229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.615246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.615257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615269 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.615285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.615297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.615308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615325 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.615344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.615356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.615372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615384 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.615395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.615407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615424 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.615435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.615453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615465 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.615476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.615488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.615499 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.615510 | orchestrator | 2025-05-31 16:43:09.615526 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-31 16:43:09.615538 | orchestrator | Saturday 31 May 2025 16:39:23 +0000 (0:00:02.389) 0:04:31.550 ********** 2025-05-31 16:43:09.615549 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.615560 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.615570 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.615581 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 16:43:09.615592 | orchestrator | 2025-05-31 
16:43:09.615603 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-05-31 16:43:09.615638 | orchestrator | Saturday 31 May 2025 16:39:24 +0000 (0:00:01.121) 0:04:32.672 **********
2025-05-31 16:43:09.615657 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-31 16:43:09.615668 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-31 16:43:09.615679 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-31 16:43:09.615690 | orchestrator |
2025-05-31 16:43:09.615701 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-05-31 16:43:09.615712 | orchestrator | Saturday 31 May 2025 16:39:25 +0000 (0:00:00.794) 0:04:33.466 **********
2025-05-31 16:43:09.615723 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-31 16:43:09.615734 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-31 16:43:09.615745 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-31 16:43:09.615756 | orchestrator |
2025-05-31 16:43:09.615767 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-05-31 16:43:09.615777 | orchestrator | Saturday 31 May 2025 16:39:25 +0000 (0:00:00.783) 0:04:34.250 **********
2025-05-31 16:43:09.615788 | orchestrator | ok: [testbed-node-3]
2025-05-31 16:43:09.615799 | orchestrator | ok: [testbed-node-4]
2025-05-31 16:43:09.615810 | orchestrator | ok: [testbed-node-5]
2025-05-31 16:43:09.615821 | orchestrator |
2025-05-31 16:43:09.615832 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-05-31 16:43:09.615843 | orchestrator | Saturday 31 May 2025 16:39:26 +0000 (0:00:00.668) 0:04:34.918 **********
2025-05-31 16:43:09.615854 | orchestrator | ok: [testbed-node-3]
2025-05-31 16:43:09.615864 | orchestrator | ok: [testbed-node-4]
2025-05-31 16:43:09.615875 | orchestrator | ok: [testbed-node-5]
2025-05-31 16:43:09.615886 | orchestrator |
2025-05-31 16:43:09.615897 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-05-31 16:43:09.615908 | orchestrator | Saturday 31 May 2025 16:39:27 +0000 (0:00:00.444) 0:04:35.363 **********
2025-05-31 16:43:09.615919 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-31 16:43:09.615930 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-31 16:43:09.615940 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-31 16:43:09.615951 | orchestrator |
2025-05-31 16:43:09.615962 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-05-31 16:43:09.615973 | orchestrator | Saturday 31 May 2025 16:39:28 +0000 (0:00:01.344) 0:04:36.708 **********
2025-05-31 16:43:09.615984 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-31 16:43:09.616004 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-31 16:43:09.616017 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-31 16:43:09.616028 | orchestrator |
2025-05-31 16:43:09.616039 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-05-31 16:43:09.616050 | orchestrator | Saturday 31 May 2025 16:39:29 +0000 (0:00:01.374) 0:04:38.082 **********
2025-05-31 16:43:09.616061 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-31 16:43:09.616072 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
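Note: the keyring checks and key extractions above run on the Ansible control host (the "-> localhost" delegations). The client.nova and client.cinder keyrings are then copied into the nova-compute config directory and ceph.conf into both nova-compute and nova-libvirt, i.e. the /etc/kolla/<container>/ trees that the containers bind-mount read-only at /var/lib/kolla/config_files/ (see the volume lists above). The "Pushing nova secret xml for libvirt" task a little further down registers two libvirt secrets; as logged, its loop items are, in YAML form:

- uuid: 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd
  name: client.nova secret
  enabled: true
- uuid: 63dd366f-e403-41f2-beff-dad9980a1637
  name: client.cinder secret
  enabled: 'yes'   # logged as the string 'yes' rather than a boolean

These are the UUIDs that rbd_secret_uuid-style settings point at, so libvirt can authenticate to Ceph when attaching RBD-backed disks without the raw key appearing in the Nova configuration.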
2025-05-31 16:43:09.616083 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-31 16:43:09.616101 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-31 16:43:09.616112 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-31 16:43:09.616123 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-31 16:43:09.616134 | orchestrator | 2025-05-31 16:43:09.616144 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-31 16:43:09.616155 | orchestrator | Saturday 31 May 2025 16:39:34 +0000 (0:00:05.234) 0:04:43.317 ********** 2025-05-31 16:43:09.616166 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.616177 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.616188 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.616198 | orchestrator | 2025-05-31 16:43:09.616209 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-31 16:43:09.616220 | orchestrator | Saturday 31 May 2025 16:39:35 +0000 (0:00:00.459) 0:04:43.776 ********** 2025-05-31 16:43:09.616230 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.616247 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.616258 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.616269 | orchestrator | 2025-05-31 16:43:09.616280 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-31 16:43:09.616290 | orchestrator | Saturday 31 May 2025 16:39:35 +0000 (0:00:00.501) 0:04:44.277 ********** 2025-05-31 16:43:09.616301 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.616312 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.616322 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.616333 | orchestrator | 2025-05-31 16:43:09.616344 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-31 16:43:09.616354 | orchestrator | Saturday 31 May 2025 16:39:37 +0000 (0:00:01.313) 0:04:45.591 ********** 2025-05-31 16:43:09.616366 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-31 16:43:09.616376 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-31 16:43:09.616387 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-31 16:43:09.616403 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-31 16:43:09.616414 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-31 16:43:09.616425 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-31 16:43:09.616436 | orchestrator | 2025-05-31 16:43:09.616448 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-31 16:43:09.616458 | orchestrator | Saturday 31 May 2025 16:39:40 +0000 (0:00:03.382) 0:04:48.973 ********** 2025-05-31 16:43:09.616469 | orchestrator | 
changed: [testbed-node-3] => (item=None) 2025-05-31 16:43:09.616480 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 16:43:09.616491 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 16:43:09.616502 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 16:43:09.616513 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 16:43:09.616524 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.616534 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.616545 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 16:43:09.616555 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.616566 | orchestrator | 2025-05-31 16:43:09.616577 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-31 16:43:09.616587 | orchestrator | Saturday 31 May 2025 16:39:44 +0000 (0:00:03.365) 0:04:52.339 ********** 2025-05-31 16:43:09.616598 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.616609 | orchestrator | 2025-05-31 16:43:09.616775 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-31 16:43:09.616796 | orchestrator | Saturday 31 May 2025 16:39:44 +0000 (0:00:00.129) 0:04:52.469 ********** 2025-05-31 16:43:09.616807 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.616819 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.616830 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.616840 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.616851 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.616862 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.616872 | orchestrator | 2025-05-31 16:43:09.616883 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-31 16:43:09.616894 | orchestrator | Saturday 31 May 2025 16:39:45 +0000 (0:00:01.071) 0:04:53.541 ********** 2025-05-31 16:43:09.616905 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-31 16:43:09.616928 | orchestrator | 2025-05-31 16:43:09.616939 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-31 16:43:09.616949 | orchestrator | Saturday 31 May 2025 16:39:45 +0000 (0:00:00.406) 0:04:53.948 ********** 2025-05-31 16:43:09.616959 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.616969 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.616978 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.616988 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.616997 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.617006 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.617016 | orchestrator | 2025-05-31 16:43:09.617026 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-31 16:43:09.617036 | orchestrator | Saturday 31 May 2025 16:39:46 +0000 (0:00:00.962) 0:04:54.910 ********** 2025-05-31 16:43:09.617060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.617072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.617092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.617103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.617114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.617136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.617147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.617216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.617254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617265 | 
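
Each loop item above is a kolla-ansible service definition: a mapping from the service name to its container image, host group, bind-mounted volumes, optional dimensions (for example the memlock ulimit on nova-libvirt) and a Docker healthcheck. The changed/skipping pattern per node appears to follow from the service's enabled flag and the node's membership in the service's group: testbed-node-3/4/5 pick up the compute-group services, testbed-node-0/1/2 the control-plane ones. A minimal Python sketch of that selection logic follows; it is not the kolla-ansible implementation, and the inventory mapping is an illustrative assumption rather than something read from this deployment.

    # Sketch only -- mimics the changed/skipping pattern of the
    # "Copying over config.json files for services" loop above.
    services = {
        "nova-libvirt":         {"group": "compute",              "enabled": True},
        "nova-ssh":             {"group": "compute",              "enabled": True},
        "nova-compute":         {"group": "compute",              "enabled": True},
        "nova-novncproxy":      {"group": "nova-novncproxy",      "enabled": True},
        "nova-conductor":       {"group": "nova-conductor",       "enabled": True},
        "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
        "nova-serialproxy":     {"group": "nova-serialproxy",     "enabled": False},
        "nova-compute-ironic":  {"group": "nova-compute-ironic",  "enabled": False},
    }

    # Assumed inventory: nodes 0-2 hold the control-plane groups, nodes 3-5 are compute.
    groups = {
        "compute":         {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
        "nova-novncproxy": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
        "nova-conductor":  {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
    }

    def deploys(node: str, name: str) -> bool:
        """True when the service is enabled and the node is in its host group."""
        svc = services[name]
        return svc["enabled"] and node in groups.get(svc["group"], set())

    for name in services:
        print(name, "changed" if deploys("testbed-node-3", name) else "skipping")

On testbed-node-3 this prints changed for nova-libvirt, nova-ssh and nova-compute and skipping for the rest, matching the task output above.
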
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.617308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.617355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.617418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}}) 2025-05-31 16:43:09.617487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.617512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.617523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:43:09.617925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.617943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.617955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618136 | orchestrator | 2025-05-31 16:43:09.618146 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-31 16:43:09.618156 | orchestrator | Saturday 31 May 2025 16:39:50 +0000 (0:00:04.344) 0:04:59.254 ********** 2025-05-31 16:43:09.618167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.618178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.618194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.618238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.618259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.618276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.618318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.618339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.618350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.618397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.618431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.618442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.618458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.618470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.618491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.618501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618574 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.618598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.618658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.618717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.618728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.618836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.618869 | orchestrator | 2025-05-31 16:43:09.618879 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-31 16:43:09.618889 | orchestrator | Saturday 31 May 2025 16:39:57 +0000 (0:00:06.644) 0:05:05.899 ********** 2025-05-31 16:43:09.618899 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.618909 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.618919 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.618929 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.618938 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.618953 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.618962 | orchestrator | 2025-05-31 16:43:09.618972 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-31 16:43:09.618982 | orchestrator | Saturday 31 May 2025 16:39:58 +0000 (0:00:01.369) 0:05:07.268 ********** 2025-05-31 16:43:09.618992 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-31 16:43:09.619002 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-31 16:43:09.619012 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-31 16:43:09.619021 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-31 
16:43:09.619032 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.619041 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-31 16:43:09.619051 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-31 16:43:09.619061 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.619071 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-31 16:43:09.619080 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-31 16:43:09.619090 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.619100 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-31 16:43:09.619109 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-31 16:43:09.619119 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-31 16:43:09.619129 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-31 16:43:09.619139 | orchestrator | 2025-05-31 16:43:09.619149 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-31 16:43:09.619159 | orchestrator | Saturday 31 May 2025 16:40:03 +0000 (0:00:05.028) 0:05:12.296 ********** 2025-05-31 16:43:09.619168 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.619178 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.619211 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.619222 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.619232 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.619241 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.619251 | orchestrator | 2025-05-31 16:43:09.619261 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-31 16:43:09.619271 | orchestrator | Saturday 31 May 2025 16:40:04 +0000 (0:00:00.852) 0:05:13.149 ********** 2025-05-31 16:43:09.619280 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-31 16:43:09.619290 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-31 16:43:09.619300 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-31 16:43:09.619310 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-31 16:43:09.619325 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-31 16:43:09.619336 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-31 16:43:09.619346 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-31 16:43:09.619356 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-31 16:43:09.619366 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-31 16:43:09.619376 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-31 16:43:09.619386 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.619396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-31 16:43:09.619406 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.619416 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-31 16:43:09.619425 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.619435 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-31 16:43:09.619445 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-31 16:43:09.619455 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-31 16:43:09.619465 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-31 16:43:09.619480 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-31 16:43:09.619490 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-31 16:43:09.619500 | orchestrator | 2025-05-31 16:43:09.619510 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-31 16:43:09.619519 | orchestrator | Saturday 31 May 2025 16:40:11 +0000 (0:00:06.822) 0:05:19.972 ********** 2025-05-31 16:43:09.619529 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 16:43:09.619539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 16:43:09.619548 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 16:43:09.619558 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-31 16:43:09.619575 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-31 16:43:09.619584 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-31 16:43:09.619594 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 16:43:09.619604 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 16:43:09.619627 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 16:43:09.619638 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 16:43:09.619648 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 16:43:09.619658 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 16:43:09.619667 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-31 16:43:09.619677 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.619687 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-31 16:43:09.619696 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.619706 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-31 16:43:09.619716 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.619725 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 16:43:09.619735 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 16:43:09.619745 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 16:43:09.619754 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 16:43:09.619764 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 16:43:09.619774 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 16:43:09.619784 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 16:43:09.619794 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 16:43:09.619810 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 16:43:09.619820 | orchestrator | 2025-05-31 16:43:09.619830 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-31 16:43:09.619839 | orchestrator | Saturday 31 May 2025 16:40:20 +0000 (0:00:08.931) 0:05:28.903 ********** 2025-05-31 16:43:09.619849 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.619858 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.619868 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.619877 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.619887 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.619896 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.619906 | orchestrator | 2025-05-31 16:43:09.619916 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-31 16:43:09.619926 | orchestrator | Saturday 31 May 2025 16:40:21 +0000 (0:00:00.588) 0:05:29.492 ********** 2025-05-31 16:43:09.619935 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.619965 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.619975 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.619984 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.619994 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.620004 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.620013 | orchestrator | 2025-05-31 16:43:09.620023 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-31 16:43:09.620039 | orchestrator | Saturday 31 May 2025 16:40:21 +0000 (0:00:00.769) 0:05:30.261 ********** 2025-05-31 16:43:09.620049 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.620059 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.620068 | orchestrator | skipping: [testbed-node-2] 2025-05-31 
16:43:09.620078 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.620087 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.620097 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.620107 | orchestrator | 2025-05-31 16:43:09.620117 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-31 16:43:09.620126 | orchestrator | Saturday 31 May 2025 16:40:24 +0000 (0:00:02.903) 0:05:33.165 ********** 2025-05-31 16:43:09.620142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.620153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.620163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.620181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.620192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.620212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.620223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.620234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.620244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.620254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.620282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620322 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.620333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620353 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.620370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.620387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.620401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.620412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.620422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.620432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.620474 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.620489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.620500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.620510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.620520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.620960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621137 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.621148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621158 | orchestrator | skipping: 
[testbed-node-0] 2025-05-31 16:43:09.621168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.621190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.621201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621277 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.621287 | orchestrator | 2025-05-31 16:43:09.621297 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-31 16:43:09.621307 | orchestrator | Saturday 31 May 2025 16:40:26 +0000 (0:00:01.867) 0:05:35.032 ********** 2025-05-31 16:43:09.621317 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-31 16:43:09.621326 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-31 16:43:09.621336 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.621346 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-31 16:43:09.621355 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-31 16:43:09.621365 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.621375 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-31 16:43:09.621384 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-31 16:43:09.621394 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.621403 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-31 16:43:09.621413 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-31 16:43:09.621423 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.621433 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-31 16:43:09.621442 | orchestrator | skipping: [testbed-node-1] => 
(item=nova-compute-ironic)  2025-05-31 16:43:09.621452 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.621462 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-31 16:43:09.621471 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-31 16:43:09.621480 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.621490 | orchestrator | 2025-05-31 16:43:09.621500 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-31 16:43:09.621514 | orchestrator | Saturday 31 May 2025 16:40:27 +0000 (0:00:00.935) 0:05:35.968 ********** 2025-05-31 16:43:09.621524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.621541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.621553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.621570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.621583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-31 16:43:09.621651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-31 16:43:09.621669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621812 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-31 16:43:09.621947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.621957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-31 16:43:09.621973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 16:43:09.621989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.622000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.622053 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.622097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.622107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.622123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-31 16:43:09.622184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-31 16:43:09.622210 | orchestrator | 2025-05-31 16:43:09.622219 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-31 16:43:09.622229 | orchestrator | Saturday 31 May 2025 16:40:30 +0000 (0:00:03.296) 0:05:39.265 ********** 2025-05-31 16:43:09.622239 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.622265 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.622276 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.622286 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.622295 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.622305 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.622314 | orchestrator | 2025-05-31 16:43:09.622324 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-31 16:43:09.622333 | orchestrator | Saturday 31 May 2025 16:40:31 +0000 (0:00:00.857) 0:05:40.122 ********** 2025-05-31 16:43:09.622343 | orchestrator | 2025-05-31 16:43:09.622353 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-31 16:43:09.622368 | orchestrator | Saturday 31 May 2025 16:40:31 +0000 (0:00:00.111) 0:05:40.234 ********** 2025-05-31 16:43:09.622378 | orchestrator | 2025-05-31 16:43:09.622388 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-31 16:43:09.622397 | orchestrator | Saturday 31 May 2025 16:40:32 +0000 (0:00:00.288) 0:05:40.522 ********** 2025-05-31 16:43:09.622407 | orchestrator | 2025-05-31 16:43:09.622416 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-31 16:43:09.622426 | orchestrator | Saturday 31 May 2025 16:40:32 +0000 (0:00:00.109) 0:05:40.631 ********** 2025-05-31 16:43:09.622435 | orchestrator | 2025-05-31 16:43:09.622445 | orchestrator | 
TASK [nova-cell : Flush handlers] ********************************************** 2025-05-31 16:43:09.622455 | orchestrator | Saturday 31 May 2025 16:40:32 +0000 (0:00:00.290) 0:05:40.921 ********** 2025-05-31 16:43:09.622464 | orchestrator | 2025-05-31 16:43:09.622474 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-31 16:43:09.622483 | orchestrator | Saturday 31 May 2025 16:40:32 +0000 (0:00:00.112) 0:05:41.034 ********** 2025-05-31 16:43:09.622493 | orchestrator | 2025-05-31 16:43:09.622502 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-31 16:43:09.622516 | orchestrator | Saturday 31 May 2025 16:40:32 +0000 (0:00:00.265) 0:05:41.299 ********** 2025-05-31 16:43:09.622526 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.622536 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.622545 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.622555 | orchestrator | 2025-05-31 16:43:09.622565 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-31 16:43:09.622575 | orchestrator | Saturday 31 May 2025 16:40:45 +0000 (0:00:12.303) 0:05:53.603 ********** 2025-05-31 16:43:09.622584 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.622594 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.622603 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.622629 | orchestrator | 2025-05-31 16:43:09.622640 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-31 16:43:09.622650 | orchestrator | Saturday 31 May 2025 16:40:55 +0000 (0:00:10.494) 0:06:04.097 ********** 2025-05-31 16:43:09.622660 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.622670 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.622679 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.622689 | orchestrator | 2025-05-31 16:43:09.622699 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-31 16:43:09.622708 | orchestrator | Saturday 31 May 2025 16:41:15 +0000 (0:00:20.109) 0:06:24.207 ********** 2025-05-31 16:43:09.622718 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.622727 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.622737 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.622746 | orchestrator | 2025-05-31 16:43:09.622756 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-31 16:43:09.622766 | orchestrator | Saturday 31 May 2025 16:41:40 +0000 (0:00:24.296) 0:06:48.504 ********** 2025-05-31 16:43:09.622776 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.622786 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.622795 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.622805 | orchestrator | 2025-05-31 16:43:09.622815 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-31 16:43:09.622824 | orchestrator | Saturday 31 May 2025 16:41:40 +0000 (0:00:00.765) 0:06:49.269 ********** 2025-05-31 16:43:09.622834 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.622844 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.622853 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.622863 | orchestrator | 2025-05-31 16:43:09.622872 | orchestrator | 
RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-31 16:43:09.622882 | orchestrator | Saturday 31 May 2025 16:41:41 +0000 (0:00:00.928) 0:06:50.197 ********** 2025-05-31 16:43:09.622892 | orchestrator | changed: [testbed-node-4] 2025-05-31 16:43:09.622909 | orchestrator | changed: [testbed-node-5] 2025-05-31 16:43:09.622919 | orchestrator | changed: [testbed-node-3] 2025-05-31 16:43:09.622928 | orchestrator | 2025-05-31 16:43:09.622938 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-31 16:43:09.622948 | orchestrator | Saturday 31 May 2025 16:42:03 +0000 (0:00:21.509) 0:07:11.707 ********** 2025-05-31 16:43:09.622957 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.622967 | orchestrator | 2025-05-31 16:43:09.622976 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-31 16:43:09.622986 | orchestrator | Saturday 31 May 2025 16:42:03 +0000 (0:00:00.124) 0:07:11.832 ********** 2025-05-31 16:43:09.622995 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.623005 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.623015 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.623024 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.623033 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.623056 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-05-31 16:43:09.623067 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:43:09.623076 | orchestrator | 2025-05-31 16:43:09.623092 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-31 16:43:09.623102 | orchestrator | Saturday 31 May 2025 16:42:24 +0000 (0:00:21.398) 0:07:33.230 ********** 2025-05-31 16:43:09.623112 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.623122 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.623131 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.623141 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.623150 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.623160 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.623169 | orchestrator | 2025-05-31 16:43:09.623179 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-31 16:43:09.623188 | orchestrator | Saturday 31 May 2025 16:42:33 +0000 (0:00:08.645) 0:07:41.876 ********** 2025-05-31 16:43:09.623198 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.623208 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.623217 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.623227 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.623236 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.623246 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-05-31 16:43:09.623255 | orchestrator | 2025-05-31 16:43:09.623265 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-31 16:43:09.623274 | orchestrator | Saturday 31 May 2025 16:42:36 +0000 (0:00:02.969) 0:07:44.846 ********** 2025-05-31 16:43:09.623284 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 
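For context on the discover_computes.yml tasks around this point in the log (get a list of existing cells, extract the current cell settings, discover nova hosts): these steps roughly correspond to nova-manage cell_v2 calls executed inside the nova_conductor container on one of the controllers. The sketch below is illustrative only, assuming docker CLI access on a controller such as testbed-node-0 and the container name shown in the items above; the exact invocation and flags used by the kolla-ansible role may differ.

```python
# Illustrative sketch only: a manual re-run of the kind of checks the
# nova-cell role performs in discover_computes.yml. The container name
# nova_conductor is taken from the log above; the nova-manage cell_v2
# subcommands are the standard upstream interface.
import subprocess

def nova_manage(*args: str) -> str:
    """Run nova-manage inside the nova_conductor container and return stdout."""
    cmd = ["docker", "exec", "nova_conductor", "nova-manage", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# "Get a list of existing cells": the role parses this table for the
# regular cell (cell0 is the special cell that never hosts computes).
print(nova_manage("cell_v2", "list_cells", "--verbose"))

# "Discover nova hosts": maps newly registered nova-compute services into
# the cell so the scheduler can place instances on them.
print(nova_manage("cell_v2", "discover_hosts", "--by-service"))
```

Host discovery is what makes the freshly restarted nova-compute services on testbed-node-3/4/5 schedulable, which is why the preceding task waits for them to register before the discovery step runs.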
2025-05-31 16:43:09.623294 | orchestrator | 2025-05-31 16:43:09.623303 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-31 16:43:09.623325 | orchestrator | Saturday 31 May 2025 16:42:46 +0000 (0:00:10.393) 0:07:55.239 ********** 2025-05-31 16:43:09.623335 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:43:09.623345 | orchestrator | 2025-05-31 16:43:09.623354 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-31 16:43:09.623364 | orchestrator | Saturday 31 May 2025 16:42:48 +0000 (0:00:01.187) 0:07:56.427 ********** 2025-05-31 16:43:09.623373 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.623383 | orchestrator | 2025-05-31 16:43:09.623397 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-31 16:43:09.623407 | orchestrator | Saturday 31 May 2025 16:42:49 +0000 (0:00:01.040) 0:07:57.467 ********** 2025-05-31 16:43:09.623416 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-31 16:43:09.623426 | orchestrator | 2025-05-31 16:43:09.623436 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-31 16:43:09.623452 | orchestrator | Saturday 31 May 2025 16:42:58 +0000 (0:00:09.832) 0:08:07.299 ********** 2025-05-31 16:43:09.623462 | orchestrator | ok: [testbed-node-3] 2025-05-31 16:43:09.623471 | orchestrator | ok: [testbed-node-4] 2025-05-31 16:43:09.623481 | orchestrator | ok: [testbed-node-5] 2025-05-31 16:43:09.623490 | orchestrator | ok: [testbed-node-0] 2025-05-31 16:43:09.623500 | orchestrator | ok: [testbed-node-1] 2025-05-31 16:43:09.623509 | orchestrator | ok: [testbed-node-2] 2025-05-31 16:43:09.623519 | orchestrator | 2025-05-31 16:43:09.623528 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-31 16:43:09.623538 | orchestrator | 2025-05-31 16:43:09.623547 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-31 16:43:09.623557 | orchestrator | Saturday 31 May 2025 16:43:01 +0000 (0:00:02.157) 0:08:09.456 ********** 2025-05-31 16:43:09.623567 | orchestrator | changed: [testbed-node-0] 2025-05-31 16:43:09.623576 | orchestrator | changed: [testbed-node-2] 2025-05-31 16:43:09.623586 | orchestrator | changed: [testbed-node-1] 2025-05-31 16:43:09.623595 | orchestrator | 2025-05-31 16:43:09.623605 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-31 16:43:09.623631 | orchestrator | 2025-05-31 16:43:09.623641 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-31 16:43:09.623651 | orchestrator | Saturday 31 May 2025 16:43:02 +0000 (0:00:00.951) 0:08:10.408 ********** 2025-05-31 16:43:09.623660 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.623670 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.623679 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.623689 | orchestrator | 2025-05-31 16:43:09.623698 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-31 16:43:09.623708 | orchestrator | 2025-05-31 16:43:09.623718 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-31 16:43:09.623727 | orchestrator | Saturday 31 May 2025 16:43:02 +0000 
(0:00:00.762) 0:08:11.171 ********** 2025-05-31 16:43:09.623737 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-31 16:43:09.623746 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-31 16:43:09.623756 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-31 16:43:09.623765 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-31 16:43:09.623775 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-31 16:43:09.623785 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-31 16:43:09.623794 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-31 16:43:09.623804 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-31 16:43:09.623813 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-31 16:43:09.623823 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-31 16:43:09.623833 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-31 16:43:09.623843 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-31 16:43:09.623852 | orchestrator | skipping: [testbed-node-3] 2025-05-31 16:43:09.623862 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-31 16:43:09.623872 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-31 16:43:09.623881 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-31 16:43:09.623891 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-31 16:43:09.623906 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-31 16:43:09.623916 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-31 16:43:09.623926 | orchestrator | skipping: [testbed-node-4] 2025-05-31 16:43:09.623935 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-31 16:43:09.623945 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-31 16:43:09.623961 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-31 16:43:09.623971 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-31 16:43:09.623981 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-31 16:43:09.623991 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-31 16:43:09.624001 | orchestrator | skipping: [testbed-node-5] 2025-05-31 16:43:09.624010 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-31 16:43:09.624020 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-31 16:43:09.624030 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-31 16:43:09.624040 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-31 16:43:09.624050 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-31 16:43:09.624059 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-31 16:43:09.624069 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.624079 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.624088 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-31 16:43:09.624098 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  
2025-05-31 16:43:09.624120 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-31 16:43:09.624130 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-31 16:43:09.624140 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-31 16:43:09.624149 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-31 16:43:09.624163 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.624173 | orchestrator | 2025-05-31 16:43:09.624183 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-31 16:43:09.624193 | orchestrator | 2025-05-31 16:43:09.624202 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-31 16:43:09.624212 | orchestrator | Saturday 31 May 2025 16:43:04 +0000 (0:00:01.323) 0:08:12.494 ********** 2025-05-31 16:43:09.624222 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-31 16:43:09.624231 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-31 16:43:09.624241 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.624251 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-31 16:43:09.624260 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-31 16:43:09.624270 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.624279 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-31 16:43:09.624289 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-05-31 16:43:09.624299 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.624309 | orchestrator | 2025-05-31 16:43:09.624318 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-31 16:43:09.624328 | orchestrator | 2025-05-31 16:43:09.624338 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-31 16:43:09.624348 | orchestrator | Saturday 31 May 2025 16:43:04 +0000 (0:00:00.742) 0:08:13.236 ********** 2025-05-31 16:43:09.624357 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.624367 | orchestrator | 2025-05-31 16:43:09.624377 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-31 16:43:09.624386 | orchestrator | 2025-05-31 16:43:09.624396 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-31 16:43:09.624406 | orchestrator | Saturday 31 May 2025 16:43:05 +0000 (0:00:00.897) 0:08:14.134 ********** 2025-05-31 16:43:09.624416 | orchestrator | skipping: [testbed-node-0] 2025-05-31 16:43:09.624425 | orchestrator | skipping: [testbed-node-1] 2025-05-31 16:43:09.624435 | orchestrator | skipping: [testbed-node-2] 2025-05-31 16:43:09.624444 | orchestrator | 2025-05-31 16:43:09.624454 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 16:43:09.624473 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 16:43:09.624484 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-31 16:43:09.624494 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-31 16:43:09.624504 | orchestrator | testbed-node-2 : ok=27  changed=19  
unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-31 16:43:09.624514 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-31 16:43:09.624523 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-31 16:43:09.624533 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-31 16:43:09.624543 | orchestrator | 2025-05-31 16:43:09.624553 | orchestrator | 2025-05-31 16:43:09.624567 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 16:43:09.624577 | orchestrator | Saturday 31 May 2025 16:43:06 +0000 (0:00:00.484) 0:08:14.619 ********** 2025-05-31 16:43:09.624587 | orchestrator | =============================================================================== 2025-05-31 16:43:09.624597 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.89s 2025-05-31 16:43:09.624607 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.30s 2025-05-31 16:43:09.624641 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.51s 2025-05-31 16:43:09.624651 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.40s 2025-05-31 16:43:09.624660 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.11s 2025-05-31 16:43:09.624670 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.62s 2025-05-31 16:43:09.624680 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.64s 2025-05-31 16:43:09.624689 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.96s 2025-05-31 16:43:09.624699 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.01s 2025-05-31 16:43:09.624709 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.30s 2025-05-31 16:43:09.624719 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.29s 2025-05-31 16:43:09.624728 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.16s 2025-05-31 16:43:09.624738 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.06s 2025-05-31 16:43:09.624747 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 10.49s 2025-05-31 16:43:09.624757 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.39s 2025-05-31 16:43:09.624771 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.83s 2025-05-31 16:43:09.624781 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.93s 2025-05-31 16:43:09.624791 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.88s 2025-05-31 16:43:09.624800 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.74s 2025-05-31 16:43:09.624810 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.65s 2025-05-31 16:43:12.651704 | orchestrator | 2025-05-31 16:43:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
16:43:12.651837 | orchestrator | 2025-05-31 16:43:12 | INFO  | Wait 1 second(s) until the next check [... the "Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED" / "Wait 1 second(s) until the next check" message pair repeats roughly every three seconds from 16:43:15 through 16:47:31 while the task runs ...] 2025-05-31 16:47:34.793247 | orchestrator | 2025-05-31 16:47:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in
state STARTED 2025-05-31 16:47:34.793347 | orchestrator | 2025-05-31 16:47:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:37.853698 | orchestrator | 2025-05-31 16:47:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:37.853832 | orchestrator | 2025-05-31 16:47:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:40.901412 | orchestrator | 2025-05-31 16:47:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:40.901536 | orchestrator | 2025-05-31 16:47:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:43.949629 | orchestrator | 2025-05-31 16:47:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:43.949747 | orchestrator | 2025-05-31 16:47:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:46.999084 | orchestrator | 2025-05-31 16:47:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:46.999263 | orchestrator | 2025-05-31 16:47:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:50.052712 | orchestrator | 2025-05-31 16:47:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:50.052847 | orchestrator | 2025-05-31 16:47:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:53.102992 | orchestrator | 2025-05-31 16:47:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:53.103122 | orchestrator | 2025-05-31 16:47:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:56.152596 | orchestrator | 2025-05-31 16:47:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:56.152727 | orchestrator | 2025-05-31 16:47:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:47:59.198615 | orchestrator | 2025-05-31 16:47:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:47:59.198751 | orchestrator | 2025-05-31 16:47:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:02.246616 | orchestrator | 2025-05-31 16:48:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:02.246721 | orchestrator | 2025-05-31 16:48:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:05.294555 | orchestrator | 2025-05-31 16:48:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:05.294664 | orchestrator | 2025-05-31 16:48:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:08.345066 | orchestrator | 2025-05-31 16:48:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:08.345167 | orchestrator | 2025-05-31 16:48:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:11.392773 | orchestrator | 2025-05-31 16:48:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:11.392950 | orchestrator | 2025-05-31 16:48:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:14.439762 | orchestrator | 2025-05-31 16:48:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:14.439865 | orchestrator | 2025-05-31 16:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:17.479497 | orchestrator | 2025-05-31 16:48:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:17.479604 | orchestrator | 2025-05-31 16:48:17 | 
INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:20.525580 | orchestrator | 2025-05-31 16:48:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:20.525682 | orchestrator | 2025-05-31 16:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:23.575594 | orchestrator | 2025-05-31 16:48:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:23.575698 | orchestrator | 2025-05-31 16:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:26.625365 | orchestrator | 2025-05-31 16:48:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:26.625469 | orchestrator | 2025-05-31 16:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:29.676684 | orchestrator | 2025-05-31 16:48:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:29.676794 | orchestrator | 2025-05-31 16:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:32.723740 | orchestrator | 2025-05-31 16:48:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:32.723872 | orchestrator | 2025-05-31 16:48:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:35.774406 | orchestrator | 2025-05-31 16:48:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:35.774495 | orchestrator | 2025-05-31 16:48:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:38.831319 | orchestrator | 2025-05-31 16:48:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:38.831422 | orchestrator | 2025-05-31 16:48:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:41.877583 | orchestrator | 2025-05-31 16:48:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:41.877709 | orchestrator | 2025-05-31 16:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:44.927504 | orchestrator | 2025-05-31 16:48:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:44.927601 | orchestrator | 2025-05-31 16:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:47.973857 | orchestrator | 2025-05-31 16:48:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:47.974006 | orchestrator | 2025-05-31 16:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:51.021075 | orchestrator | 2025-05-31 16:48:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:51.021179 | orchestrator | 2025-05-31 16:48:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:54.072965 | orchestrator | 2025-05-31 16:48:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:54.073072 | orchestrator | 2025-05-31 16:48:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:48:57.122906 | orchestrator | 2025-05-31 16:48:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:48:57.123048 | orchestrator | 2025-05-31 16:48:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:00.176182 | orchestrator | 2025-05-31 16:49:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:00.176293 | orchestrator | 2025-05-31 16:49:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:03.226198 | 
orchestrator | 2025-05-31 16:49:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:03.226304 | orchestrator | 2025-05-31 16:49:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:06.266252 | orchestrator | 2025-05-31 16:49:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:06.266361 | orchestrator | 2025-05-31 16:49:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:09.313233 | orchestrator | 2025-05-31 16:49:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:09.313336 | orchestrator | 2025-05-31 16:49:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:12.366084 | orchestrator | 2025-05-31 16:49:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:12.366201 | orchestrator | 2025-05-31 16:49:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:15.414800 | orchestrator | 2025-05-31 16:49:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:15.415002 | orchestrator | 2025-05-31 16:49:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:18.457601 | orchestrator | 2025-05-31 16:49:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:18.457726 | orchestrator | 2025-05-31 16:49:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:21.507170 | orchestrator | 2025-05-31 16:49:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:21.507304 | orchestrator | 2025-05-31 16:49:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:24.548147 | orchestrator | 2025-05-31 16:49:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:24.548292 | orchestrator | 2025-05-31 16:49:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:27.595477 | orchestrator | 2025-05-31 16:49:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:27.595613 | orchestrator | 2025-05-31 16:49:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:30.645303 | orchestrator | 2025-05-31 16:49:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:30.645439 | orchestrator | 2025-05-31 16:49:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:33.696267 | orchestrator | 2025-05-31 16:49:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:33.696369 | orchestrator | 2025-05-31 16:49:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:36.748570 | orchestrator | 2025-05-31 16:49:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:36.748700 | orchestrator | 2025-05-31 16:49:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:39.797357 | orchestrator | 2025-05-31 16:49:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:39.797486 | orchestrator | 2025-05-31 16:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:42.855869 | orchestrator | 2025-05-31 16:49:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:42.856060 | orchestrator | 2025-05-31 16:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:45.913330 | orchestrator | 2025-05-31 16:49:45 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:45.913459 | orchestrator | 2025-05-31 16:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:48.955532 | orchestrator | 2025-05-31 16:49:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:48.955669 | orchestrator | 2025-05-31 16:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:52.010624 | orchestrator | 2025-05-31 16:49:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:52.010755 | orchestrator | 2025-05-31 16:49:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:55.065794 | orchestrator | 2025-05-31 16:49:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:55.065904 | orchestrator | 2025-05-31 16:49:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:49:58.112668 | orchestrator | 2025-05-31 16:49:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:49:58.112774 | orchestrator | 2025-05-31 16:49:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:01.155953 | orchestrator | 2025-05-31 16:50:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:01.156166 | orchestrator | 2025-05-31 16:50:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:04.204558 | orchestrator | 2025-05-31 16:50:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:04.204670 | orchestrator | 2025-05-31 16:50:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:07.250735 | orchestrator | 2025-05-31 16:50:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:07.250842 | orchestrator | 2025-05-31 16:50:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:10.299854 | orchestrator | 2025-05-31 16:50:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:10.300020 | orchestrator | 2025-05-31 16:50:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:13.354941 | orchestrator | 2025-05-31 16:50:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:13.355059 | orchestrator | 2025-05-31 16:50:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:16.412701 | orchestrator | 2025-05-31 16:50:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:16.412803 | orchestrator | 2025-05-31 16:50:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:19.461644 | orchestrator | 2025-05-31 16:50:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:19.461758 | orchestrator | 2025-05-31 16:50:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:22.525680 | orchestrator | 2025-05-31 16:50:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:22.525785 | orchestrator | 2025-05-31 16:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:25.584188 | orchestrator | 2025-05-31 16:50:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:25.584292 | orchestrator | 2025-05-31 16:50:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:28.638162 | orchestrator | 2025-05-31 16:50:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
16:50:28.638293 | orchestrator | 2025-05-31 16:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:31.678988 | orchestrator | 2025-05-31 16:50:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:31.679144 | orchestrator | 2025-05-31 16:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:34.726227 | orchestrator | 2025-05-31 16:50:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:34.726301 | orchestrator | 2025-05-31 16:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:37.771564 | orchestrator | 2025-05-31 16:50:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:37.771666 | orchestrator | 2025-05-31 16:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:40.817731 | orchestrator | 2025-05-31 16:50:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:40.817832 | orchestrator | 2025-05-31 16:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:43.869375 | orchestrator | 2025-05-31 16:50:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:43.869482 | orchestrator | 2025-05-31 16:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:46.918224 | orchestrator | 2025-05-31 16:50:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:46.918330 | orchestrator | 2025-05-31 16:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:49.968412 | orchestrator | 2025-05-31 16:50:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:49.968515 | orchestrator | 2025-05-31 16:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:53.014801 | orchestrator | 2025-05-31 16:50:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:53.014909 | orchestrator | 2025-05-31 16:50:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:56.063475 | orchestrator | 2025-05-31 16:50:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:56.063583 | orchestrator | 2025-05-31 16:50:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:50:59.109759 | orchestrator | 2025-05-31 16:50:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:50:59.109849 | orchestrator | 2025-05-31 16:50:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:02.162205 | orchestrator | 2025-05-31 16:51:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:02.162314 | orchestrator | 2025-05-31 16:51:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:05.209886 | orchestrator | 2025-05-31 16:51:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:05.209991 | orchestrator | 2025-05-31 16:51:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:08.256731 | orchestrator | 2025-05-31 16:51:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:08.256836 | orchestrator | 2025-05-31 16:51:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:11.297613 | orchestrator | 2025-05-31 16:51:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:11.297660 | orchestrator | 2025-05-31 16:51:11 | INFO  | Wait 1 second(s) 
until the next check 2025-05-31 16:51:14.346501 | orchestrator | 2025-05-31 16:51:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:14.346622 | orchestrator | 2025-05-31 16:51:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:17.402423 | orchestrator | 2025-05-31 16:51:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:17.402525 | orchestrator | 2025-05-31 16:51:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:20.449301 | orchestrator | 2025-05-31 16:51:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:20.449401 | orchestrator | 2025-05-31 16:51:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:23.497322 | orchestrator | 2025-05-31 16:51:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:23.497421 | orchestrator | 2025-05-31 16:51:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:26.541365 | orchestrator | 2025-05-31 16:51:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:26.541475 | orchestrator | 2025-05-31 16:51:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:29.591142 | orchestrator | 2025-05-31 16:51:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:29.591253 | orchestrator | 2025-05-31 16:51:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:32.641497 | orchestrator | 2025-05-31 16:51:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:32.642351 | orchestrator | 2025-05-31 16:51:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:35.699564 | orchestrator | 2025-05-31 16:51:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:35.699668 | orchestrator | 2025-05-31 16:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:38.752034 | orchestrator | 2025-05-31 16:51:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:38.752226 | orchestrator | 2025-05-31 16:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:41.799695 | orchestrator | 2025-05-31 16:51:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:41.799808 | orchestrator | 2025-05-31 16:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:44.847837 | orchestrator | 2025-05-31 16:51:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:44.847982 | orchestrator | 2025-05-31 16:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:47.895664 | orchestrator | 2025-05-31 16:51:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:47.895760 | orchestrator | 2025-05-31 16:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:50.941055 | orchestrator | 2025-05-31 16:51:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:50.941193 | orchestrator | 2025-05-31 16:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:53.991841 | orchestrator | 2025-05-31 16:51:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:53.991989 | orchestrator | 2025-05-31 16:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:51:57.041974 | orchestrator | 2025-05-31 
16:51:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:51:57.042193 | orchestrator | 2025-05-31 16:51:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:00.088590 | orchestrator | 2025-05-31 16:52:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:00.088711 | orchestrator | 2025-05-31 16:52:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:03.138200 | orchestrator | 2025-05-31 16:52:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:03.138277 | orchestrator | 2025-05-31 16:52:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:06.201691 | orchestrator | 2025-05-31 16:52:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:06.201795 | orchestrator | 2025-05-31 16:52:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:09.249345 | orchestrator | 2025-05-31 16:52:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:09.249443 | orchestrator | 2025-05-31 16:52:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:12.297921 | orchestrator | 2025-05-31 16:52:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:12.297991 | orchestrator | 2025-05-31 16:52:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:15.342962 | orchestrator | 2025-05-31 16:52:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:15.343033 | orchestrator | 2025-05-31 16:52:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:18.383577 | orchestrator | 2025-05-31 16:52:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:18.383640 | orchestrator | 2025-05-31 16:52:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:21.428944 | orchestrator | 2025-05-31 16:52:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:21.429057 | orchestrator | 2025-05-31 16:52:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:24.471717 | orchestrator | 2025-05-31 16:52:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:24.471818 | orchestrator | 2025-05-31 16:52:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:27.522872 | orchestrator | 2025-05-31 16:52:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:27.522998 | orchestrator | 2025-05-31 16:52:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:30.571121 | orchestrator | 2025-05-31 16:52:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:30.571266 | orchestrator | 2025-05-31 16:52:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:33.617207 | orchestrator | 2025-05-31 16:52:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:33.618165 | orchestrator | 2025-05-31 16:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:36.669226 | orchestrator | 2025-05-31 16:52:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:52:36.669319 | orchestrator | 2025-05-31 16:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:52:39.719541 | orchestrator | 2025-05-31 16:52:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 
2025-05-31 16:52:39.719670 | orchestrator | 2025-05-31 16:52:39 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:52:42.771029 | orchestrator | 2025-05-31 16:52:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:52:42.771210 | orchestrator | 2025-05-31 16:52:42 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:52:45.822777 | orchestrator | 2025-05-31 16:52:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:52:45.822860 | orchestrator | 2025-05-31 16:52:45 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:52:48.864100 | orchestrator | 2025-05-31 16:52:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:52:48.864263 | orchestrator | 2025-05-31 16:52:48 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:52:51.910735 | orchestrator | 2025-05-31 16:52:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:52:51.910834 | orchestrator | 2025-05-31 16:52:51 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:52:54.958855 | orchestrator | 2025-05-31 16:52:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:52:54.959839 | orchestrator | 2025-05-31 16:52:54 | INFO  | Task 07254d85-4bd5-4e01-93dd-d53f4d030ac2 is in state STARTED
2025-05-31 16:52:54.959853 | orchestrator | 2025-05-31 16:52:54 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:52:58.015732 | orchestrator | 2025-05-31 16:52:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:52:58.015946 | orchestrator | 2025-05-31 16:52:58 | INFO  | Task 07254d85-4bd5-4e01-93dd-d53f4d030ac2 is in state STARTED
2025-05-31 16:52:58.015968 | orchestrator | 2025-05-31 16:52:58 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:53:01.076256 | orchestrator | 2025-05-31 16:53:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:53:01.078469 | orchestrator | 2025-05-31 16:53:01 | INFO  | Task 07254d85-4bd5-4e01-93dd-d53f4d030ac2 is in state STARTED
2025-05-31 16:53:01.078530 | orchestrator | 2025-05-31 16:53:01 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:53:04.138215 | orchestrator | 2025-05-31 16:53:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:53:04.138468 | orchestrator | 2025-05-31 16:53:04 | INFO  | Task 07254d85-4bd5-4e01-93dd-d53f4d030ac2 is in state STARTED
2025-05-31 16:53:04.138952 | orchestrator | 2025-05-31 16:53:04 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:53:07.188901 | orchestrator | 2025-05-31 16:53:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:53:07.189432 | orchestrator | 2025-05-31 16:53:07 | INFO  | Task 07254d85-4bd5-4e01-93dd-d53f4d030ac2 is in state SUCCESS
2025-05-31 16:53:07.189470 | orchestrator | 2025-05-31 16:53:07 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:53:10.234725 | orchestrator | 2025-05-31 16:53:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:53:10.234784 | orchestrator | 2025-05-31 16:53:10 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:53:13.279646 | orchestrator | 2025-05-31 16:53:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 16:53:13.279738 | orchestrator | 2025-05-31 16:53:13 | INFO  | Wait 1 second(s) until the next check
2025-05-31 16:53:16.327299 |
orchestrator | 2025-05-31 16:53:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:16.327407 | orchestrator | 2025-05-31 16:53:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:19.376080 | orchestrator | 2025-05-31 16:53:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:19.376210 | orchestrator | 2025-05-31 16:53:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:22.428480 | orchestrator | 2025-05-31 16:53:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:22.428561 | orchestrator | 2025-05-31 16:53:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:25.477764 | orchestrator | 2025-05-31 16:53:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:25.477841 | orchestrator | 2025-05-31 16:53:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:28.525520 | orchestrator | 2025-05-31 16:53:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:28.525609 | orchestrator | 2025-05-31 16:53:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:31.569957 | orchestrator | 2025-05-31 16:53:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:31.570121 | orchestrator | 2025-05-31 16:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:34.618337 | orchestrator | 2025-05-31 16:53:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:34.618439 | orchestrator | 2025-05-31 16:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:37.670564 | orchestrator | 2025-05-31 16:53:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:37.670669 | orchestrator | 2025-05-31 16:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:40.723639 | orchestrator | 2025-05-31 16:53:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:40.723753 | orchestrator | 2025-05-31 16:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:43.768366 | orchestrator | 2025-05-31 16:53:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:43.768466 | orchestrator | 2025-05-31 16:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:46.824389 | orchestrator | 2025-05-31 16:53:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:46.824501 | orchestrator | 2025-05-31 16:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:49.867710 | orchestrator | 2025-05-31 16:53:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:49.867794 | orchestrator | 2025-05-31 16:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:52.916957 | orchestrator | 2025-05-31 16:53:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:52.917059 | orchestrator | 2025-05-31 16:53:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:55.962648 | orchestrator | 2025-05-31 16:53:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:55.962754 | orchestrator | 2025-05-31 16:53:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:53:59.006274 | orchestrator | 2025-05-31 16:53:59 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:53:59.006364 | orchestrator | 2025-05-31 16:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:02.042763 | orchestrator | 2025-05-31 16:54:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:02.042868 | orchestrator | 2025-05-31 16:54:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:05.096396 | orchestrator | 2025-05-31 16:54:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:05.097293 | orchestrator | 2025-05-31 16:54:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:08.141966 | orchestrator | 2025-05-31 16:54:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:08.142129 | orchestrator | 2025-05-31 16:54:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:11.206336 | orchestrator | 2025-05-31 16:54:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:11.206435 | orchestrator | 2025-05-31 16:54:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:14.255883 | orchestrator | 2025-05-31 16:54:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:14.256004 | orchestrator | 2025-05-31 16:54:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:17.298143 | orchestrator | 2025-05-31 16:54:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:17.298218 | orchestrator | 2025-05-31 16:54:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:20.340601 | orchestrator | 2025-05-31 16:54:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:20.340689 | orchestrator | 2025-05-31 16:54:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:23.388968 | orchestrator | 2025-05-31 16:54:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:23.389074 | orchestrator | 2025-05-31 16:54:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:26.445885 | orchestrator | 2025-05-31 16:54:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:26.446986 | orchestrator | 2025-05-31 16:54:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:29.500037 | orchestrator | 2025-05-31 16:54:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:29.500143 | orchestrator | 2025-05-31 16:54:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:32.556424 | orchestrator | 2025-05-31 16:54:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:32.556525 | orchestrator | 2025-05-31 16:54:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:35.606557 | orchestrator | 2025-05-31 16:54:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:35.606657 | orchestrator | 2025-05-31 16:54:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:38.657499 | orchestrator | 2025-05-31 16:54:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:38.657595 | orchestrator | 2025-05-31 16:54:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:41.708695 | orchestrator | 2025-05-31 16:54:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
16:54:41.708798 | orchestrator | 2025-05-31 16:54:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:44.756867 | orchestrator | 2025-05-31 16:54:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:44.756969 | orchestrator | 2025-05-31 16:54:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:47.807145 | orchestrator | 2025-05-31 16:54:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:47.807236 | orchestrator | 2025-05-31 16:54:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:50.849169 | orchestrator | 2025-05-31 16:54:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:50.849360 | orchestrator | 2025-05-31 16:54:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:53.892993 | orchestrator | 2025-05-31 16:54:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:53.893076 | orchestrator | 2025-05-31 16:54:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:56.940822 | orchestrator | 2025-05-31 16:54:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:56.940929 | orchestrator | 2025-05-31 16:54:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:54:59.992207 | orchestrator | 2025-05-31 16:54:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:54:59.992330 | orchestrator | 2025-05-31 16:54:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:03.053077 | orchestrator | 2025-05-31 16:55:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:03.053180 | orchestrator | 2025-05-31 16:55:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:06.106642 | orchestrator | 2025-05-31 16:55:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:06.106769 | orchestrator | 2025-05-31 16:55:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:09.155932 | orchestrator | 2025-05-31 16:55:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:09.156055 | orchestrator | 2025-05-31 16:55:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:12.201430 | orchestrator | 2025-05-31 16:55:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:12.201575 | orchestrator | 2025-05-31 16:55:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:15.252171 | orchestrator | 2025-05-31 16:55:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:15.252270 | orchestrator | 2025-05-31 16:55:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:18.302433 | orchestrator | 2025-05-31 16:55:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:18.302563 | orchestrator | 2025-05-31 16:55:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:21.349891 | orchestrator | 2025-05-31 16:55:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:21.350001 | orchestrator | 2025-05-31 16:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:24.393716 | orchestrator | 2025-05-31 16:55:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:24.393873 | orchestrator | 2025-05-31 16:55:24 | INFO  | Wait 1 second(s) 
until the next check 2025-05-31 16:55:27.445791 | orchestrator | 2025-05-31 16:55:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:27.445903 | orchestrator | 2025-05-31 16:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:30.507551 | orchestrator | 2025-05-31 16:55:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:30.507691 | orchestrator | 2025-05-31 16:55:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:33.555294 | orchestrator | 2025-05-31 16:55:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:33.555457 | orchestrator | 2025-05-31 16:55:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:36.606454 | orchestrator | 2025-05-31 16:55:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:36.607310 | orchestrator | 2025-05-31 16:55:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:39.655008 | orchestrator | 2025-05-31 16:55:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:39.655144 | orchestrator | 2025-05-31 16:55:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:42.703326 | orchestrator | 2025-05-31 16:55:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:42.703499 | orchestrator | 2025-05-31 16:55:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:45.753433 | orchestrator | 2025-05-31 16:55:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:45.753535 | orchestrator | 2025-05-31 16:55:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:48.804969 | orchestrator | 2025-05-31 16:55:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:48.805054 | orchestrator | 2025-05-31 16:55:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:51.860960 | orchestrator | 2025-05-31 16:55:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:51.861055 | orchestrator | 2025-05-31 16:55:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:54.921662 | orchestrator | 2025-05-31 16:55:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:54.921813 | orchestrator | 2025-05-31 16:55:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:55:57.972740 | orchestrator | 2025-05-31 16:55:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:55:57.972848 | orchestrator | 2025-05-31 16:55:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:01.024817 | orchestrator | 2025-05-31 16:56:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:01.024944 | orchestrator | 2025-05-31 16:56:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:04.072901 | orchestrator | 2025-05-31 16:56:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:04.072976 | orchestrator | 2025-05-31 16:56:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:07.119315 | orchestrator | 2025-05-31 16:56:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:07.119466 | orchestrator | 2025-05-31 16:56:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:10.165590 | orchestrator | 2025-05-31 
16:56:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:10.165660 | orchestrator | 2025-05-31 16:56:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:13.220313 | orchestrator | 2025-05-31 16:56:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:13.220419 | orchestrator | 2025-05-31 16:56:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:16.276018 | orchestrator | 2025-05-31 16:56:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:16.276113 | orchestrator | 2025-05-31 16:56:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:19.337375 | orchestrator | 2025-05-31 16:56:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:19.337518 | orchestrator | 2025-05-31 16:56:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:22.397662 | orchestrator | 2025-05-31 16:56:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:22.397806 | orchestrator | 2025-05-31 16:56:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:25.443046 | orchestrator | 2025-05-31 16:56:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:25.443147 | orchestrator | 2025-05-31 16:56:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:28.485979 | orchestrator | 2025-05-31 16:56:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:28.486119 | orchestrator | 2025-05-31 16:56:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:31.542364 | orchestrator | 2025-05-31 16:56:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:31.542499 | orchestrator | 2025-05-31 16:56:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:34.593831 | orchestrator | 2025-05-31 16:56:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:34.593932 | orchestrator | 2025-05-31 16:56:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:37.640223 | orchestrator | 2025-05-31 16:56:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:37.640330 | orchestrator | 2025-05-31 16:56:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:40.686739 | orchestrator | 2025-05-31 16:56:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:40.686838 | orchestrator | 2025-05-31 16:56:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:43.739550 | orchestrator | 2025-05-31 16:56:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:43.739645 | orchestrator | 2025-05-31 16:56:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:46.796364 | orchestrator | 2025-05-31 16:56:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:46.796487 | orchestrator | 2025-05-31 16:56:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:49.836114 | orchestrator | 2025-05-31 16:56:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 16:56:49.836215 | orchestrator | 2025-05-31 16:56:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 16:56:52.887946 | orchestrator | 2025-05-31 16:56:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 
2025-05-31 16:56:52.888278 | orchestrator | 2025-05-31 16:56:52 | INFO  | Wait 1 second(s) until the next check
[polling output condensed: from 2025-05-31 16:56:55 to 2025-05-31 17:02:52 task f2bae605-c816-4e28-b7ae-14a464bce7fe was checked roughly every 3 seconds and reported in state STARTED on every check, each check followed by "Wait 1 second(s) until the next check"]
2025-05-31 17:02:55.769788 | orchestrator | 2025-05-31 17:02:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 17:02:55.771190 | orchestrator | 2025-05-31 17:02:55 | INFO  | Task a9aab432-561f-4a33-8cfd-65d46a6b0fe4 is in state STARTED
2025-05-31 17:02:55.771338 | orchestrator | 2025-05-31 17:02:55 | INFO  | Wait 1 second(s) until the next check
[polling output condensed: both tasks were reported in state STARTED on the checks at 17:02:58, 17:03:01 and 17:03:04]
2025-05-31 17:03:08.001471 | orchestrator | 2025-05-31 17:03:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED
2025-05-31 17:03:08.001864 | orchestrator | 2025-05-31 17:03:07 | INFO  | Task a9aab432-561f-4a33-8cfd-65d46a6b0fe4 is in state SUCCESS
2025-05-31 17:03:08.001895 | orchestrator | 2025-05-31 17:03:07 | INFO  | Wait 1 second(s) until the next check
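[editor's note: the output above is a client-side polling loop: the state of each submitted task is requested, the client announces a 1-second wait, and the check repeats until the task leaves the STARTED state; the roughly 3-second gap between timestamps suggests the state request itself takes a couple of seconds on top of the announced wait. A minimal sketch of such a wait loop is given below, assuming a hypothetical get_task_state helper; this is an illustration of the pattern, not the actual OSISM client code.]

import time

# Hypothetical state lookup; the real log is produced by the osism CLI,
# whose client API is not shown here. This stub simulates a task that
# finishes after a few checks so the example is runnable on its own.
_calls = {"count": 0}

def get_task_state(task_id: str) -> str:
    """Placeholder for 'ask the manager for the current state of a task'."""
    _calls["count"] += 1
    return "SUCCESS" if _calls["count"] > 3 else "STARTED"

def wait_for_task(task_id: str, interval: float = 1.0, timeout: float = 3600.0) -> str:
    """Poll a task until it leaves the STARTED/PENDING states or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("STARTED", "PENDING"):
            return state
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")

if __name__ == "__main__":
    wait_for_task("f2bae605-c816-4e28-b7ae-14a464bce7fe")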
[polling output condensed: from 2025-05-31 17:03:11 to 2025-05-31 17:11:58 task f2bae605-c816-4e28-b7ae-14a464bce7fe was checked roughly every 3 seconds and reported in state STARTED on every check, each check followed by "Wait 1 second(s) until the next check"]
2025-05-31 17:12:01.426213 | 
orchestrator | 2025-05-31 17:12:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:01.426320 | orchestrator | 2025-05-31 17:12:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:04.480574 | orchestrator | 2025-05-31 17:12:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:04.480688 | orchestrator | 2025-05-31 17:12:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:07.525983 | orchestrator | 2025-05-31 17:12:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:07.526139 | orchestrator | 2025-05-31 17:12:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:10.573472 | orchestrator | 2025-05-31 17:12:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:10.573574 | orchestrator | 2025-05-31 17:12:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:13.618698 | orchestrator | 2025-05-31 17:12:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:13.618805 | orchestrator | 2025-05-31 17:12:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:16.663152 | orchestrator | 2025-05-31 17:12:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:16.663308 | orchestrator | 2025-05-31 17:12:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:19.709074 | orchestrator | 2025-05-31 17:12:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:19.709221 | orchestrator | 2025-05-31 17:12:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:22.756632 | orchestrator | 2025-05-31 17:12:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:22.756803 | orchestrator | 2025-05-31 17:12:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:25.806009 | orchestrator | 2025-05-31 17:12:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:25.806116 | orchestrator | 2025-05-31 17:12:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:28.853346 | orchestrator | 2025-05-31 17:12:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:28.853447 | orchestrator | 2025-05-31 17:12:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:31.907063 | orchestrator | 2025-05-31 17:12:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:31.907171 | orchestrator | 2025-05-31 17:12:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:34.958087 | orchestrator | 2025-05-31 17:12:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:34.958239 | orchestrator | 2025-05-31 17:12:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:38.010752 | orchestrator | 2025-05-31 17:12:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:38.010859 | orchestrator | 2025-05-31 17:12:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:41.058314 | orchestrator | 2025-05-31 17:12:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:41.058471 | orchestrator | 2025-05-31 17:12:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:44.099528 | orchestrator | 2025-05-31 17:12:44 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:44.099630 | orchestrator | 2025-05-31 17:12:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:47.143816 | orchestrator | 2025-05-31 17:12:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:47.143906 | orchestrator | 2025-05-31 17:12:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:50.187670 | orchestrator | 2025-05-31 17:12:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:50.187774 | orchestrator | 2025-05-31 17:12:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:53.236324 | orchestrator | 2025-05-31 17:12:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:53.236427 | orchestrator | 2025-05-31 17:12:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:56.288523 | orchestrator | 2025-05-31 17:12:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:56.289516 | orchestrator | 2025-05-31 17:12:56 | INFO  | Task ebe016f9-9202-4faf-93e7-5846ec2c47c2 is in state STARTED 2025-05-31 17:12:56.289531 | orchestrator | 2025-05-31 17:12:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:12:59.346740 | orchestrator | 2025-05-31 17:12:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:12:59.348738 | orchestrator | 2025-05-31 17:12:59 | INFO  | Task ebe016f9-9202-4faf-93e7-5846ec2c47c2 is in state STARTED 2025-05-31 17:12:59.348802 | orchestrator | 2025-05-31 17:12:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:02.419909 | orchestrator | 2025-05-31 17:13:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:02.420395 | orchestrator | 2025-05-31 17:13:02 | INFO  | Task ebe016f9-9202-4faf-93e7-5846ec2c47c2 is in state STARTED 2025-05-31 17:13:02.420687 | orchestrator | 2025-05-31 17:13:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:05.481144 | orchestrator | 2025-05-31 17:13:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:05.481429 | orchestrator | 2025-05-31 17:13:05 | INFO  | Task ebe016f9-9202-4faf-93e7-5846ec2c47c2 is in state SUCCESS 2025-05-31 17:13:05.481454 | orchestrator | 2025-05-31 17:13:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:08.532956 | orchestrator | 2025-05-31 17:13:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:08.533058 | orchestrator | 2025-05-31 17:13:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:11.586280 | orchestrator | 2025-05-31 17:13:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:11.587059 | orchestrator | 2025-05-31 17:13:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:14.634994 | orchestrator | 2025-05-31 17:13:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:14.635112 | orchestrator | 2025-05-31 17:13:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:17.677318 | orchestrator | 2025-05-31 17:13:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:17.677425 | orchestrator | 2025-05-31 17:13:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:20.723343 | orchestrator | 2025-05-31 17:13:20 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:20.723475 | orchestrator | 2025-05-31 17:13:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:23.770600 | orchestrator | 2025-05-31 17:13:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:23.770707 | orchestrator | 2025-05-31 17:13:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:26.820806 | orchestrator | 2025-05-31 17:13:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:26.820914 | orchestrator | 2025-05-31 17:13:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:29.868807 | orchestrator | 2025-05-31 17:13:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:29.868993 | orchestrator | 2025-05-31 17:13:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:32.922344 | orchestrator | 2025-05-31 17:13:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:32.922446 | orchestrator | 2025-05-31 17:13:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:35.968291 | orchestrator | 2025-05-31 17:13:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:35.968404 | orchestrator | 2025-05-31 17:13:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:39.018770 | orchestrator | 2025-05-31 17:13:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:39.019110 | orchestrator | 2025-05-31 17:13:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:42.067377 | orchestrator | 2025-05-31 17:13:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:42.067490 | orchestrator | 2025-05-31 17:13:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:45.109852 | orchestrator | 2025-05-31 17:13:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:45.109959 | orchestrator | 2025-05-31 17:13:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:48.154377 | orchestrator | 2025-05-31 17:13:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:48.154482 | orchestrator | 2025-05-31 17:13:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:51.204064 | orchestrator | 2025-05-31 17:13:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:51.204178 | orchestrator | 2025-05-31 17:13:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:54.263337 | orchestrator | 2025-05-31 17:13:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:54.263687 | orchestrator | 2025-05-31 17:13:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:13:57.315514 | orchestrator | 2025-05-31 17:13:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:13:57.315619 | orchestrator | 2025-05-31 17:13:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:00.359712 | orchestrator | 2025-05-31 17:14:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:00.359854 | orchestrator | 2025-05-31 17:14:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:03.406923 | orchestrator | 2025-05-31 17:14:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
17:14:03.407053 | orchestrator | 2025-05-31 17:14:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:06.455188 | orchestrator | 2025-05-31 17:14:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:06.455360 | orchestrator | 2025-05-31 17:14:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:09.503804 | orchestrator | 2025-05-31 17:14:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:09.503917 | orchestrator | 2025-05-31 17:14:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:12.555859 | orchestrator | 2025-05-31 17:14:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:12.555991 | orchestrator | 2025-05-31 17:14:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:15.601917 | orchestrator | 2025-05-31 17:14:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:15.602105 | orchestrator | 2025-05-31 17:14:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:18.645642 | orchestrator | 2025-05-31 17:14:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:18.645750 | orchestrator | 2025-05-31 17:14:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:21.689945 | orchestrator | 2025-05-31 17:14:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:21.690112 | orchestrator | 2025-05-31 17:14:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:24.736897 | orchestrator | 2025-05-31 17:14:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:24.737003 | orchestrator | 2025-05-31 17:14:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:27.780636 | orchestrator | 2025-05-31 17:14:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:27.780739 | orchestrator | 2025-05-31 17:14:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:30.826333 | orchestrator | 2025-05-31 17:14:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:30.826413 | orchestrator | 2025-05-31 17:14:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:33.881319 | orchestrator | 2025-05-31 17:14:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:33.881426 | orchestrator | 2025-05-31 17:14:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:36.937047 | orchestrator | 2025-05-31 17:14:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:36.937156 | orchestrator | 2025-05-31 17:14:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:39.982223 | orchestrator | 2025-05-31 17:14:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:39.982387 | orchestrator | 2025-05-31 17:14:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:43.042100 | orchestrator | 2025-05-31 17:14:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:43.042202 | orchestrator | 2025-05-31 17:14:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:46.085663 | orchestrator | 2025-05-31 17:14:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:46.085770 | orchestrator | 2025-05-31 17:14:46 | INFO  | Wait 1 second(s) 
until the next check 2025-05-31 17:14:49.125013 | orchestrator | 2025-05-31 17:14:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:49.125122 | orchestrator | 2025-05-31 17:14:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:52.168465 | orchestrator | 2025-05-31 17:14:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:52.168889 | orchestrator | 2025-05-31 17:14:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:55.220898 | orchestrator | 2025-05-31 17:14:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:55.221006 | orchestrator | 2025-05-31 17:14:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:14:58.267205 | orchestrator | 2025-05-31 17:14:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:14:58.267349 | orchestrator | 2025-05-31 17:14:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:01.314470 | orchestrator | 2025-05-31 17:15:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:01.314569 | orchestrator | 2025-05-31 17:15:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:04.363948 | orchestrator | 2025-05-31 17:15:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:04.364055 | orchestrator | 2025-05-31 17:15:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:07.409742 | orchestrator | 2025-05-31 17:15:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:07.409849 | orchestrator | 2025-05-31 17:15:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:10.454718 | orchestrator | 2025-05-31 17:15:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:10.454819 | orchestrator | 2025-05-31 17:15:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:13.507071 | orchestrator | 2025-05-31 17:15:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:13.507179 | orchestrator | 2025-05-31 17:15:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:16.561882 | orchestrator | 2025-05-31 17:15:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:16.561986 | orchestrator | 2025-05-31 17:15:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:19.614367 | orchestrator | 2025-05-31 17:15:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:19.614487 | orchestrator | 2025-05-31 17:15:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:22.672951 | orchestrator | 2025-05-31 17:15:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:22.673052 | orchestrator | 2025-05-31 17:15:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:25.722465 | orchestrator | 2025-05-31 17:15:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:25.722576 | orchestrator | 2025-05-31 17:15:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:28.773838 | orchestrator | 2025-05-31 17:15:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:28.773944 | orchestrator | 2025-05-31 17:15:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:31.821989 | orchestrator | 2025-05-31 
17:15:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:31.822167 | orchestrator | 2025-05-31 17:15:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:34.870372 | orchestrator | 2025-05-31 17:15:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:34.870473 | orchestrator | 2025-05-31 17:15:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:37.910309 | orchestrator | 2025-05-31 17:15:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:37.910483 | orchestrator | 2025-05-31 17:15:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:40.960953 | orchestrator | 2025-05-31 17:15:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:40.961056 | orchestrator | 2025-05-31 17:15:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:44.009053 | orchestrator | 2025-05-31 17:15:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:44.009164 | orchestrator | 2025-05-31 17:15:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:47.061372 | orchestrator | 2025-05-31 17:15:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:47.061473 | orchestrator | 2025-05-31 17:15:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:50.106886 | orchestrator | 2025-05-31 17:15:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:50.106995 | orchestrator | 2025-05-31 17:15:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:53.145770 | orchestrator | 2025-05-31 17:15:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:53.145882 | orchestrator | 2025-05-31 17:15:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:56.202287 | orchestrator | 2025-05-31 17:15:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:56.202447 | orchestrator | 2025-05-31 17:15:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:15:59.243469 | orchestrator | 2025-05-31 17:15:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:15:59.243583 | orchestrator | 2025-05-31 17:15:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:02.290077 | orchestrator | 2025-05-31 17:16:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:02.290227 | orchestrator | 2025-05-31 17:16:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:05.337635 | orchestrator | 2025-05-31 17:16:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:05.337733 | orchestrator | 2025-05-31 17:16:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:08.387880 | orchestrator | 2025-05-31 17:16:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:08.387986 | orchestrator | 2025-05-31 17:16:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:11.430882 | orchestrator | 2025-05-31 17:16:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:11.431072 | orchestrator | 2025-05-31 17:16:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:14.485922 | orchestrator | 2025-05-31 17:16:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 
2025-05-31 17:16:14.486114 | orchestrator | 2025-05-31 17:16:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:17.533232 | orchestrator | 2025-05-31 17:16:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:17.533328 | orchestrator | 2025-05-31 17:16:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:20.583547 | orchestrator | 2025-05-31 17:16:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:20.583659 | orchestrator | 2025-05-31 17:16:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:23.633928 | orchestrator | 2025-05-31 17:16:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:23.634085 | orchestrator | 2025-05-31 17:16:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:26.677904 | orchestrator | 2025-05-31 17:16:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:26.678826 | orchestrator | 2025-05-31 17:16:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:29.721924 | orchestrator | 2025-05-31 17:16:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:29.722096 | orchestrator | 2025-05-31 17:16:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:32.771630 | orchestrator | 2025-05-31 17:16:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:32.771733 | orchestrator | 2025-05-31 17:16:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:35.822507 | orchestrator | 2025-05-31 17:16:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:35.822612 | orchestrator | 2025-05-31 17:16:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:38.874376 | orchestrator | 2025-05-31 17:16:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:38.874531 | orchestrator | 2025-05-31 17:16:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:41.925689 | orchestrator | 2025-05-31 17:16:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:41.925787 | orchestrator | 2025-05-31 17:16:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:44.973522 | orchestrator | 2025-05-31 17:16:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:44.973631 | orchestrator | 2025-05-31 17:16:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:48.019743 | orchestrator | 2025-05-31 17:16:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:48.019833 | orchestrator | 2025-05-31 17:16:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:51.069216 | orchestrator | 2025-05-31 17:16:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:51.069296 | orchestrator | 2025-05-31 17:16:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:54.115377 | orchestrator | 2025-05-31 17:16:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:54.115531 | orchestrator | 2025-05-31 17:16:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:16:57.164049 | orchestrator | 2025-05-31 17:16:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:16:57.164157 | orchestrator | 2025-05-31 17:16:57 | INFO  | Wait 1 
second(s) until the next check 2025-05-31 17:17:00.214574 | orchestrator | 2025-05-31 17:17:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:00.214696 | orchestrator | 2025-05-31 17:17:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:03.262092 | orchestrator | 2025-05-31 17:17:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:03.262197 | orchestrator | 2025-05-31 17:17:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:06.310678 | orchestrator | 2025-05-31 17:17:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:06.310775 | orchestrator | 2025-05-31 17:17:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:09.357612 | orchestrator | 2025-05-31 17:17:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:09.357713 | orchestrator | 2025-05-31 17:17:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:12.407149 | orchestrator | 2025-05-31 17:17:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:12.407261 | orchestrator | 2025-05-31 17:17:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:15.444407 | orchestrator | 2025-05-31 17:17:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:15.444552 | orchestrator | 2025-05-31 17:17:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:18.494964 | orchestrator | 2025-05-31 17:17:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:18.495069 | orchestrator | 2025-05-31 17:17:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:21.544893 | orchestrator | 2025-05-31 17:17:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:21.545003 | orchestrator | 2025-05-31 17:17:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:24.600776 | orchestrator | 2025-05-31 17:17:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:24.600879 | orchestrator | 2025-05-31 17:17:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:27.648353 | orchestrator | 2025-05-31 17:17:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:27.648544 | orchestrator | 2025-05-31 17:17:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:30.692687 | orchestrator | 2025-05-31 17:17:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:30.692807 | orchestrator | 2025-05-31 17:17:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:33.747416 | orchestrator | 2025-05-31 17:17:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:33.747604 | orchestrator | 2025-05-31 17:17:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:36.788747 | orchestrator | 2025-05-31 17:17:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:36.788857 | orchestrator | 2025-05-31 17:17:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:39.836731 | orchestrator | 2025-05-31 17:17:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:39.836833 | orchestrator | 2025-05-31 17:17:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:42.885625 | orchestrator | 
2025-05-31 17:17:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:42.885741 | orchestrator | 2025-05-31 17:17:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:45.930383 | orchestrator | 2025-05-31 17:17:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:45.930605 | orchestrator | 2025-05-31 17:17:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:48.978931 | orchestrator | 2025-05-31 17:17:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:48.979039 | orchestrator | 2025-05-31 17:17:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:52.029951 | orchestrator | 2025-05-31 17:17:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:52.030142 | orchestrator | 2025-05-31 17:17:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:55.075451 | orchestrator | 2025-05-31 17:17:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:55.075633 | orchestrator | 2025-05-31 17:17:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:17:58.123081 | orchestrator | 2025-05-31 17:17:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:17:58.123208 | orchestrator | 2025-05-31 17:17:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:01.169757 | orchestrator | 2025-05-31 17:18:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:01.169867 | orchestrator | 2025-05-31 17:18:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:04.218185 | orchestrator | 2025-05-31 17:18:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:04.218293 | orchestrator | 2025-05-31 17:18:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:07.260155 | orchestrator | 2025-05-31 17:18:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:07.260302 | orchestrator | 2025-05-31 17:18:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:10.304341 | orchestrator | 2025-05-31 17:18:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:10.304452 | orchestrator | 2025-05-31 17:18:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:13.354879 | orchestrator | 2025-05-31 17:18:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:13.355020 | orchestrator | 2025-05-31 17:18:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:16.410317 | orchestrator | 2025-05-31 17:18:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:16.410445 | orchestrator | 2025-05-31 17:18:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:19.460772 | orchestrator | 2025-05-31 17:18:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:19.460905 | orchestrator | 2025-05-31 17:18:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:22.505486 | orchestrator | 2025-05-31 17:18:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:22.505669 | orchestrator | 2025-05-31 17:18:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:25.551604 | orchestrator | 2025-05-31 17:18:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in 
state STARTED 2025-05-31 17:18:25.551737 | orchestrator | 2025-05-31 17:18:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:28.597816 | orchestrator | 2025-05-31 17:18:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:28.597945 | orchestrator | 2025-05-31 17:18:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:31.649312 | orchestrator | 2025-05-31 17:18:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:31.649483 | orchestrator | 2025-05-31 17:18:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:34.701649 | orchestrator | 2025-05-31 17:18:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:34.701752 | orchestrator | 2025-05-31 17:18:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:37.757435 | orchestrator | 2025-05-31 17:18:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:37.757599 | orchestrator | 2025-05-31 17:18:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:40.799459 | orchestrator | 2025-05-31 17:18:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:40.799651 | orchestrator | 2025-05-31 17:18:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:43.844974 | orchestrator | 2025-05-31 17:18:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:43.845110 | orchestrator | 2025-05-31 17:18:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:46.888074 | orchestrator | 2025-05-31 17:18:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:46.888175 | orchestrator | 2025-05-31 17:18:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:49.930128 | orchestrator | 2025-05-31 17:18:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:49.930236 | orchestrator | 2025-05-31 17:18:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:52.983008 | orchestrator | 2025-05-31 17:18:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:52.983114 | orchestrator | 2025-05-31 17:18:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:56.033752 | orchestrator | 2025-05-31 17:18:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:56.033867 | orchestrator | 2025-05-31 17:18:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:18:59.081695 | orchestrator | 2025-05-31 17:18:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:18:59.081799 | orchestrator | 2025-05-31 17:18:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:02.132965 | orchestrator | 2025-05-31 17:19:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:02.133072 | orchestrator | 2025-05-31 17:19:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:05.184207 | orchestrator | 2025-05-31 17:19:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:05.184313 | orchestrator | 2025-05-31 17:19:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:08.255214 | orchestrator | 2025-05-31 17:19:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:08.255303 | orchestrator | 2025-05-31 17:19:08 | 
INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:11.305937 | orchestrator | 2025-05-31 17:19:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:11.306119 | orchestrator | 2025-05-31 17:19:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:14.353406 | orchestrator | 2025-05-31 17:19:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:14.353487 | orchestrator | 2025-05-31 17:19:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:17.398272 | orchestrator | 2025-05-31 17:19:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:17.398401 | orchestrator | 2025-05-31 17:19:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:20.452017 | orchestrator | 2025-05-31 17:19:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:20.452123 | orchestrator | 2025-05-31 17:19:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:23.502571 | orchestrator | 2025-05-31 17:19:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:23.502763 | orchestrator | 2025-05-31 17:19:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:26.547823 | orchestrator | 2025-05-31 17:19:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:26.547924 | orchestrator | 2025-05-31 17:19:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:29.605888 | orchestrator | 2025-05-31 17:19:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:29.605997 | orchestrator | 2025-05-31 17:19:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:32.653316 | orchestrator | 2025-05-31 17:19:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:32.653421 | orchestrator | 2025-05-31 17:19:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:35.699272 | orchestrator | 2025-05-31 17:19:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:35.699370 | orchestrator | 2025-05-31 17:19:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:38.744463 | orchestrator | 2025-05-31 17:19:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:38.744556 | orchestrator | 2025-05-31 17:19:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:41.785122 | orchestrator | 2025-05-31 17:19:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:41.785228 | orchestrator | 2025-05-31 17:19:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:44.828362 | orchestrator | 2025-05-31 17:19:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:44.828477 | orchestrator | 2025-05-31 17:19:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:47.880749 | orchestrator | 2025-05-31 17:19:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:47.880851 | orchestrator | 2025-05-31 17:19:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:50.926412 | orchestrator | 2025-05-31 17:19:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:50.926513 | orchestrator | 2025-05-31 17:19:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:53.974127 | 
orchestrator | 2025-05-31 17:19:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:53.974245 | orchestrator | 2025-05-31 17:19:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:19:57.024705 | orchestrator | 2025-05-31 17:19:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:19:57.024809 | orchestrator | 2025-05-31 17:19:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:00.070495 | orchestrator | 2025-05-31 17:20:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:00.070602 | orchestrator | 2025-05-31 17:20:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:03.119308 | orchestrator | 2025-05-31 17:20:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:03.119441 | orchestrator | 2025-05-31 17:20:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:06.172845 | orchestrator | 2025-05-31 17:20:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:06.172954 | orchestrator | 2025-05-31 17:20:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:09.224373 | orchestrator | 2025-05-31 17:20:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:09.224464 | orchestrator | 2025-05-31 17:20:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:12.273120 | orchestrator | 2025-05-31 17:20:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:12.273226 | orchestrator | 2025-05-31 17:20:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:15.325381 | orchestrator | 2025-05-31 17:20:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:15.325485 | orchestrator | 2025-05-31 17:20:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:18.372764 | orchestrator | 2025-05-31 17:20:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:18.372849 | orchestrator | 2025-05-31 17:20:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:21.420303 | orchestrator | 2025-05-31 17:20:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:21.420411 | orchestrator | 2025-05-31 17:20:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:24.472937 | orchestrator | 2025-05-31 17:20:24 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:24.473044 | orchestrator | 2025-05-31 17:20:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:27.517381 | orchestrator | 2025-05-31 17:20:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:27.517488 | orchestrator | 2025-05-31 17:20:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:30.567970 | orchestrator | 2025-05-31 17:20:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:30.568074 | orchestrator | 2025-05-31 17:20:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:33.620450 | orchestrator | 2025-05-31 17:20:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:33.620582 | orchestrator | 2025-05-31 17:20:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:36.670661 | orchestrator | 2025-05-31 17:20:36 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:36.670851 | orchestrator | 2025-05-31 17:20:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:39.728767 | orchestrator | 2025-05-31 17:20:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:39.729310 | orchestrator | 2025-05-31 17:20:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:42.766879 | orchestrator | 2025-05-31 17:20:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:42.766977 | orchestrator | 2025-05-31 17:20:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:45.818206 | orchestrator | 2025-05-31 17:20:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:45.818311 | orchestrator | 2025-05-31 17:20:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:48.865209 | orchestrator | 2025-05-31 17:20:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:48.865345 | orchestrator | 2025-05-31 17:20:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:51.916825 | orchestrator | 2025-05-31 17:20:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:51.916931 | orchestrator | 2025-05-31 17:20:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:54.964425 | orchestrator | 2025-05-31 17:20:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:54.964532 | orchestrator | 2025-05-31 17:20:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:20:58.008922 | orchestrator | 2025-05-31 17:20:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:20:58.008987 | orchestrator | 2025-05-31 17:20:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:01.065363 | orchestrator | 2025-05-31 17:21:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:01.065975 | orchestrator | 2025-05-31 17:21:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:04.119033 | orchestrator | 2025-05-31 17:21:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:04.119143 | orchestrator | 2025-05-31 17:21:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:07.166926 | orchestrator | 2025-05-31 17:21:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:07.167044 | orchestrator | 2025-05-31 17:21:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:10.212838 | orchestrator | 2025-05-31 17:21:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:10.212946 | orchestrator | 2025-05-31 17:21:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:13.264501 | orchestrator | 2025-05-31 17:21:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:13.264603 | orchestrator | 2025-05-31 17:21:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:16.314397 | orchestrator | 2025-05-31 17:21:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:16.314510 | orchestrator | 2025-05-31 17:21:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:19.364433 | orchestrator | 2025-05-31 17:21:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
17:21:19.364534 | orchestrator | 2025-05-31 17:21:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:22.419740 | orchestrator | 2025-05-31 17:21:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:22.419905 | orchestrator | 2025-05-31 17:21:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:25.470331 | orchestrator | 2025-05-31 17:21:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:25.470436 | orchestrator | 2025-05-31 17:21:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:28.520156 | orchestrator | 2025-05-31 17:21:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:28.520253 | orchestrator | 2025-05-31 17:21:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:31.581367 | orchestrator | 2025-05-31 17:21:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:31.581469 | orchestrator | 2025-05-31 17:21:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:34.628610 | orchestrator | 2025-05-31 17:21:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:34.628766 | orchestrator | 2025-05-31 17:21:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:37.680144 | orchestrator | 2025-05-31 17:21:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:37.680254 | orchestrator | 2025-05-31 17:21:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:40.731431 | orchestrator | 2025-05-31 17:21:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:40.731537 | orchestrator | 2025-05-31 17:21:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:43.779156 | orchestrator | 2025-05-31 17:21:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:43.779256 | orchestrator | 2025-05-31 17:21:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:46.825143 | orchestrator | 2025-05-31 17:21:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:46.825241 | orchestrator | 2025-05-31 17:21:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:49.870176 | orchestrator | 2025-05-31 17:21:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:49.870277 | orchestrator | 2025-05-31 17:21:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:52.914505 | orchestrator | 2025-05-31 17:21:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:52.914610 | orchestrator | 2025-05-31 17:21:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:55.958732 | orchestrator | 2025-05-31 17:21:55 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:55.958884 | orchestrator | 2025-05-31 17:21:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:21:59.019672 | orchestrator | 2025-05-31 17:21:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:21:59.019780 | orchestrator | 2025-05-31 17:21:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:02.063692 | orchestrator | 2025-05-31 17:22:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:02.063799 | orchestrator | 2025-05-31 17:22:02 | INFO  | Wait 1 second(s) 
until the next check 2025-05-31 17:22:05.114825 | orchestrator | 2025-05-31 17:22:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:05.114948 | orchestrator | 2025-05-31 17:22:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:08.168431 | orchestrator | 2025-05-31 17:22:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:08.168543 | orchestrator | 2025-05-31 17:22:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:11.226114 | orchestrator | 2025-05-31 17:22:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:11.226197 | orchestrator | 2025-05-31 17:22:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:14.278793 | orchestrator | 2025-05-31 17:22:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:14.279123 | orchestrator | 2025-05-31 17:22:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:17.327582 | orchestrator | 2025-05-31 17:22:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:17.327691 | orchestrator | 2025-05-31 17:22:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:20.381476 | orchestrator | 2025-05-31 17:22:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:20.381615 | orchestrator | 2025-05-31 17:22:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:23.438606 | orchestrator | 2025-05-31 17:22:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:23.438729 | orchestrator | 2025-05-31 17:22:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:26.494107 | orchestrator | 2025-05-31 17:22:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:26.494216 | orchestrator | 2025-05-31 17:22:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:29.539372 | orchestrator | 2025-05-31 17:22:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:29.539468 | orchestrator | 2025-05-31 17:22:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:32.586331 | orchestrator | 2025-05-31 17:22:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:32.586436 | orchestrator | 2025-05-31 17:22:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:35.636304 | orchestrator | 2025-05-31 17:22:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:35.636408 | orchestrator | 2025-05-31 17:22:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:38.692214 | orchestrator | 2025-05-31 17:22:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:38.692335 | orchestrator | 2025-05-31 17:22:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:41.750578 | orchestrator | 2025-05-31 17:22:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:41.750687 | orchestrator | 2025-05-31 17:22:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:44.806403 | orchestrator | 2025-05-31 17:22:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:44.806507 | orchestrator | 2025-05-31 17:22:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:47.859458 | orchestrator | 2025-05-31 
17:22:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:47.859571 | orchestrator | 2025-05-31 17:22:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:50.912026 | orchestrator | 2025-05-31 17:22:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:50.912186 | orchestrator | 2025-05-31 17:22:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:53.965286 | orchestrator | 2025-05-31 17:22:53 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:53.965476 | orchestrator | 2025-05-31 17:22:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:22:57.017468 | orchestrator | 2025-05-31 17:22:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:22:57.018700 | orchestrator | 2025-05-31 17:22:57 | INFO  | Task 91f0e65b-96c3-4306-b946-3c9a5263b647 is in state STARTED 2025-05-31 17:22:57.018998 | orchestrator | 2025-05-31 17:22:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:00.071048 | orchestrator | 2025-05-31 17:23:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:00.071506 | orchestrator | 2025-05-31 17:23:00 | INFO  | Task 91f0e65b-96c3-4306-b946-3c9a5263b647 is in state STARTED 2025-05-31 17:23:00.071964 | orchestrator | 2025-05-31 17:23:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:03.128878 | orchestrator | 2025-05-31 17:23:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:03.130729 | orchestrator | 2025-05-31 17:23:03 | INFO  | Task 91f0e65b-96c3-4306-b946-3c9a5263b647 is in state STARTED 2025-05-31 17:23:03.130925 | orchestrator | 2025-05-31 17:23:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:06.195568 | orchestrator | 2025-05-31 17:23:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:06.196139 | orchestrator | 2025-05-31 17:23:06 | INFO  | Task 91f0e65b-96c3-4306-b946-3c9a5263b647 is in state SUCCESS 2025-05-31 17:23:06.196188 | orchestrator | 2025-05-31 17:23:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:09.244453 | orchestrator | 2025-05-31 17:23:09 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:09.244559 | orchestrator | 2025-05-31 17:23:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:12.288481 | orchestrator | 2025-05-31 17:23:12 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:12.288608 | orchestrator | 2025-05-31 17:23:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:15.335875 | orchestrator | 2025-05-31 17:23:15 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:15.336018 | orchestrator | 2025-05-31 17:23:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:18.390529 | orchestrator | 2025-05-31 17:23:18 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:18.390626 | orchestrator | 2025-05-31 17:23:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:21.430422 | orchestrator | 2025-05-31 17:23:21 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:21.430523 | orchestrator | 2025-05-31 17:23:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:24.478815 | orchestrator | 2025-05-31 17:23:24 | INFO  | Task 
f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:24.478959 | orchestrator | 2025-05-31 17:23:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:27.523777 | orchestrator | 2025-05-31 17:23:27 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:27.523854 | orchestrator | 2025-05-31 17:23:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:30.584822 | orchestrator | 2025-05-31 17:23:30 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:30.584988 | orchestrator | 2025-05-31 17:23:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:33.634384 | orchestrator | 2025-05-31 17:23:33 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:33.634458 | orchestrator | 2025-05-31 17:23:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:36.684001 | orchestrator | 2025-05-31 17:23:36 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:36.684095 | orchestrator | 2025-05-31 17:23:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:39.735007 | orchestrator | 2025-05-31 17:23:39 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:39.735108 | orchestrator | 2025-05-31 17:23:39 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:42.784436 | orchestrator | 2025-05-31 17:23:42 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:42.784537 | orchestrator | 2025-05-31 17:23:42 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:45.831572 | orchestrator | 2025-05-31 17:23:45 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:45.831715 | orchestrator | 2025-05-31 17:23:45 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:48.880464 | orchestrator | 2025-05-31 17:23:48 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:48.880591 | orchestrator | 2025-05-31 17:23:48 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:51.936471 | orchestrator | 2025-05-31 17:23:51 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:51.936558 | orchestrator | 2025-05-31 17:23:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:54.981373 | orchestrator | 2025-05-31 17:23:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:54.981479 | orchestrator | 2025-05-31 17:23:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:23:58.033588 | orchestrator | 2025-05-31 17:23:58 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:23:58.033688 | orchestrator | 2025-05-31 17:23:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:01.075680 | orchestrator | 2025-05-31 17:24:01 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:01.075787 | orchestrator | 2025-05-31 17:24:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:04.125103 | orchestrator | 2025-05-31 17:24:04 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:04.125206 | orchestrator | 2025-05-31 17:24:04 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:07.174287 | orchestrator | 2025-05-31 17:24:07 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 
17:24:07.174394 | orchestrator | 2025-05-31 17:24:07 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:10.225739 | orchestrator | 2025-05-31 17:24:10 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:10.225836 | orchestrator | 2025-05-31 17:24:10 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:13.277060 | orchestrator | 2025-05-31 17:24:13 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:13.277145 | orchestrator | 2025-05-31 17:24:13 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:16.334477 | orchestrator | 2025-05-31 17:24:16 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:16.334577 | orchestrator | 2025-05-31 17:24:16 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:19.380535 | orchestrator | 2025-05-31 17:24:19 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:19.380640 | orchestrator | 2025-05-31 17:24:19 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:22.425504 | orchestrator | 2025-05-31 17:24:22 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:22.425627 | orchestrator | 2025-05-31 17:24:22 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:25.478838 | orchestrator | 2025-05-31 17:24:25 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:25.478953 | orchestrator | 2025-05-31 17:24:25 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:28.531502 | orchestrator | 2025-05-31 17:24:28 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:28.531603 | orchestrator | 2025-05-31 17:24:28 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:31.592818 | orchestrator | 2025-05-31 17:24:31 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:31.592893 | orchestrator | 2025-05-31 17:24:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:34.652545 | orchestrator | 2025-05-31 17:24:34 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:34.652656 | orchestrator | 2025-05-31 17:24:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:37.716795 | orchestrator | 2025-05-31 17:24:37 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:37.716918 | orchestrator | 2025-05-31 17:24:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:40.770812 | orchestrator | 2025-05-31 17:24:40 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:40.770925 | orchestrator | 2025-05-31 17:24:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:43.824078 | orchestrator | 2025-05-31 17:24:43 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:43.824187 | orchestrator | 2025-05-31 17:24:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:46.870537 | orchestrator | 2025-05-31 17:24:46 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:46.870649 | orchestrator | 2025-05-31 17:24:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:49.923162 | orchestrator | 2025-05-31 17:24:49 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:49.923271 | orchestrator | 2025-05-31 17:24:49 | INFO  | Wait 1 second(s) 
until the next check 2025-05-31 17:24:52.975333 | orchestrator | 2025-05-31 17:24:52 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:52.975442 | orchestrator | 2025-05-31 17:24:52 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:56.030852 | orchestrator | 2025-05-31 17:24:56 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:56.030974 | orchestrator | 2025-05-31 17:24:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:24:59.082955 | orchestrator | 2025-05-31 17:24:59 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:24:59.083239 | orchestrator | 2025-05-31 17:24:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:02.138727 | orchestrator | 2025-05-31 17:25:02 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:02.138851 | orchestrator | 2025-05-31 17:25:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:05.189912 | orchestrator | 2025-05-31 17:25:05 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:05.190122 | orchestrator | 2025-05-31 17:25:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:08.237986 | orchestrator | 2025-05-31 17:25:08 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:08.238215 | orchestrator | 2025-05-31 17:25:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:11.284444 | orchestrator | 2025-05-31 17:25:11 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:11.284549 | orchestrator | 2025-05-31 17:25:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:14.335308 | orchestrator | 2025-05-31 17:25:14 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:14.335395 | orchestrator | 2025-05-31 17:25:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:17.392155 | orchestrator | 2025-05-31 17:25:17 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:17.392261 | orchestrator | 2025-05-31 17:25:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:20.437558 | orchestrator | 2025-05-31 17:25:20 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:20.437658 | orchestrator | 2025-05-31 17:25:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:23.499427 | orchestrator | 2025-05-31 17:25:23 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:23.499535 | orchestrator | 2025-05-31 17:25:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:26.552001 | orchestrator | 2025-05-31 17:25:26 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:26.552175 | orchestrator | 2025-05-31 17:25:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:29.596172 | orchestrator | 2025-05-31 17:25:29 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:29.597390 | orchestrator | 2025-05-31 17:25:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:32.649229 | orchestrator | 2025-05-31 17:25:32 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:32.649332 | orchestrator | 2025-05-31 17:25:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:35.703646 | orchestrator | 2025-05-31 
17:25:35 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:35.703733 | orchestrator | 2025-05-31 17:25:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:38.759921 | orchestrator | 2025-05-31 17:25:38 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:38.760066 | orchestrator | 2025-05-31 17:25:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:41.812004 | orchestrator | 2025-05-31 17:25:41 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:41.812126 | orchestrator | 2025-05-31 17:25:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:44.874863 | orchestrator | 2025-05-31 17:25:44 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:44.874970 | orchestrator | 2025-05-31 17:25:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:47.925510 | orchestrator | 2025-05-31 17:25:47 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:47.925617 | orchestrator | 2025-05-31 17:25:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:50.977421 | orchestrator | 2025-05-31 17:25:50 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:50.977509 | orchestrator | 2025-05-31 17:25:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:54.038321 | orchestrator | 2025-05-31 17:25:54 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:54.038427 | orchestrator | 2025-05-31 17:25:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:25:57.087787 | orchestrator | 2025-05-31 17:25:57 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:25:57.087906 | orchestrator | 2025-05-31 17:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:26:00.132136 | orchestrator | 2025-05-31 17:26:00 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:26:00.132243 | orchestrator | 2025-05-31 17:26:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:26:03.180357 | orchestrator | 2025-05-31 17:26:03 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:26:03.180464 | orchestrator | 2025-05-31 17:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:26:06.233267 | orchestrator | 2025-05-31 17:26:06 | INFO  | Task f2bae605-c816-4e28-b7ae-14a464bce7fe is in state STARTED 2025-05-31 17:26:06.233379 | orchestrator | 2025-05-31 17:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 17:26:07.127553 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-31 17:26:07.132051 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-31 17:26:07.912795 | 2025-05-31 17:26:07.912980 | PLAY [Post output play] 2025-05-31 17:26:07.930735 | 2025-05-31 17:26:07.930904 | LOOP [stage-output : Register sources] 2025-05-31 17:26:08.002462 | 2025-05-31 17:26:08.002787 | TASK [stage-output : Check sudo] 2025-05-31 17:26:08.944783 | orchestrator | sudo: a password is required 2025-05-31 17:26:09.042822 | orchestrator | ok: Runtime: 0:00:00.016667 2025-05-31 17:26:09.060827 | 2025-05-31 17:26:09.061020 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-31 17:26:09.102294 | 2025-05-31 17:26:09.102616 | TASK [stage-output : Build a list of source, dest 
dictionaries] 2025-05-31 17:26:09.181590 | orchestrator | ok 2025-05-31 17:26:09.190543 | 2025-05-31 17:26:09.190684 | LOOP [stage-output : Ensure target folders exist] 2025-05-31 17:26:09.675506 | orchestrator | ok: "docs" 2025-05-31 17:26:09.675864 | 2025-05-31 17:26:09.945284 | orchestrator | ok: "artifacts" 2025-05-31 17:26:10.218257 | orchestrator | ok: "logs" 2025-05-31 17:26:10.239994 | 2025-05-31 17:26:10.240238 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-31 17:26:10.279952 | 2025-05-31 17:26:10.280272 | TASK [stage-output : Make all log files readable] 2025-05-31 17:26:10.585018 | orchestrator | ok 2025-05-31 17:26:10.595099 | 2025-05-31 17:26:10.595318 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-31 17:26:10.630939 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:10.646075 | 2025-05-31 17:26:10.646312 | TASK [stage-output : Discover log files for compression] 2025-05-31 17:26:10.667243 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:10.677684 | 2025-05-31 17:26:10.677892 | LOOP [stage-output : Archive everything from logs] 2025-05-31 17:26:10.733948 | 2025-05-31 17:26:10.734226 | PLAY [Post cleanup play] 2025-05-31 17:26:10.746105 | 2025-05-31 17:26:10.746275 | TASK [Set cloud fact (Zuul deployment)] 2025-05-31 17:26:10.805055 | orchestrator | ok 2025-05-31 17:26:10.817087 | 2025-05-31 17:26:10.817291 | TASK [Set cloud fact (local deployment)] 2025-05-31 17:26:10.852461 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:10.869750 | 2025-05-31 17:26:10.869898 | TASK [Clean the cloud environment] 2025-05-31 17:26:11.485922 | orchestrator | 2025-05-31 17:26:11 - clean up servers 2025-05-31 17:26:12.325585 | orchestrator | 2025-05-31 17:26:12 - testbed-manager 2025-05-31 17:26:12.405752 | orchestrator | 2025-05-31 17:26:12 - testbed-node-2 2025-05-31 17:26:12.502351 | orchestrator | 2025-05-31 17:26:12 - testbed-node-3 2025-05-31 17:26:12.591710 | orchestrator | 2025-05-31 17:26:12 - testbed-node-5 2025-05-31 17:26:12.681016 | orchestrator | 2025-05-31 17:26:12 - testbed-node-1 2025-05-31 17:26:12.770839 | orchestrator | 2025-05-31 17:26:12 - testbed-node-0 2025-05-31 17:26:12.856292 | orchestrator | 2025-05-31 17:26:12 - testbed-node-4 2025-05-31 17:26:12.947417 | orchestrator | 2025-05-31 17:26:12 - clean up keypairs 2025-05-31 17:26:12.966734 | orchestrator | 2025-05-31 17:26:12 - testbed 2025-05-31 17:26:12.993813 | orchestrator | 2025-05-31 17:26:12 - wait for servers to be gone 2025-05-31 17:26:21.671155 | orchestrator | 2025-05-31 17:26:21 - clean up ports 2025-05-31 17:26:21.850236 | orchestrator | 2025-05-31 17:26:21 - 05e5112f-46cc-4885-a898-4f8075927728 2025-05-31 17:26:22.736203 | orchestrator | 2025-05-31 17:26:22 - 40372515-66e1-4ef4-b683-07d7d58556a9 2025-05-31 17:26:23.001017 | orchestrator | 2025-05-31 17:26:23 - 94b6b02d-7412-4c9c-a9c3-582490fcce2a 2025-05-31 17:26:23.662233 | orchestrator | 2025-05-31 17:26:23 - aca20198-4a2a-4050-8f54-ec36c49b2ca7 2025-05-31 17:26:23.864718 | orchestrator | 2025-05-31 17:26:23 - bb9015a2-53f4-4a54-8330-c386abf13ac2 2025-05-31 17:26:24.073349 | orchestrator | 2025-05-31 17:26:24 - e92c3cdf-be68-4502-8bf9-e759fca73528 2025-05-31 17:26:24.283298 | orchestrator | 2025-05-31 17:26:24 - f8be698a-cfd1-4e06-92d2-36dcd4637ca9 2025-05-31 17:26:24.484640 | orchestrator | 2025-05-31 17:26:24 - clean up volumes 2025-05-31 17:26:24.603706 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-0-node-base 2025-05-31 
17:26:24.640999 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-4-node-base 2025-05-31 17:26:24.679276 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-2-node-base 2025-05-31 17:26:24.722313 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-3-node-base 2025-05-31 17:26:24.764760 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-manager-base 2025-05-31 17:26:24.807859 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-5-node-base 2025-05-31 17:26:24.848980 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-1-node-base 2025-05-31 17:26:24.890961 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-1-node-4 2025-05-31 17:26:24.933412 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-4-node-4 2025-05-31 17:26:24.977596 | orchestrator | 2025-05-31 17:26:24 - testbed-volume-6-node-3 2025-05-31 17:26:25.022353 | orchestrator | 2025-05-31 17:26:25 - testbed-volume-2-node-5 2025-05-31 17:26:25.061769 | orchestrator | 2025-05-31 17:26:25 - testbed-volume-8-node-5 2025-05-31 17:26:25.105755 | orchestrator | 2025-05-31 17:26:25 - testbed-volume-7-node-4 2025-05-31 17:26:25.147485 | orchestrator | 2025-05-31 17:26:25 - testbed-volume-0-node-3 2025-05-31 17:26:25.192165 | orchestrator | 2025-05-31 17:26:25 - testbed-volume-5-node-5 2025-05-31 17:26:25.240689 | orchestrator | 2025-05-31 17:26:25 - testbed-volume-3-node-3 2025-05-31 17:26:25.285333 | orchestrator | 2025-05-31 17:26:25 - disconnect routers 2025-05-31 17:26:25.403628 | orchestrator | 2025-05-31 17:26:25 - testbed 2025-05-31 17:26:26.307484 | orchestrator | 2025-05-31 17:26:26 - clean up subnets 2025-05-31 17:26:26.345454 | orchestrator | 2025-05-31 17:26:26 - subnet-testbed-management 2025-05-31 17:26:26.493565 | orchestrator | 2025-05-31 17:26:26 - clean up networks 2025-05-31 17:26:26.663286 | orchestrator | 2025-05-31 17:26:26 - net-testbed-management 2025-05-31 17:26:26.919822 | orchestrator | 2025-05-31 17:26:26 - clean up security groups 2025-05-31 17:26:26.968235 | orchestrator | 2025-05-31 17:26:26 - testbed-management 2025-05-31 17:26:27.079492 | orchestrator | 2025-05-31 17:26:27 - testbed-node 2025-05-31 17:26:27.266229 | orchestrator | 2025-05-31 17:26:27 - clean up floating ips 2025-05-31 17:26:27.301725 | orchestrator | 2025-05-31 17:26:27 - 81.163.193.95 2025-05-31 17:26:27.667631 | orchestrator | 2025-05-31 17:26:27 - clean up routers 2025-05-31 17:26:27.763876 | orchestrator | 2025-05-31 17:26:27 - testbed 2025-05-31 17:26:29.427031 | orchestrator | ok: Runtime: 0:00:17.923033 2025-05-31 17:26:29.431417 | 2025-05-31 17:26:29.431562 | PLAY RECAP 2025-05-31 17:26:29.431663 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-05-31 17:26:29.431714 | 2025-05-31 17:26:29.571958 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-31 17:26:29.573351 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-31 17:26:30.351857 | 2025-05-31 17:26:30.352043 | PLAY [Cleanup play] 2025-05-31 17:26:30.368304 | 2025-05-31 17:26:30.368440 | TASK [Set cloud fact (Zuul deployment)] 2025-05-31 17:26:30.422990 | orchestrator | ok 2025-05-31 17:26:30.431614 | 2025-05-31 17:26:30.431762 | TASK [Set cloud fact (local deployment)] 2025-05-31 17:26:30.466704 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:30.482342 | 2025-05-31 17:26:30.482498 | TASK [Clean the cloud environment] 2025-05-31 17:26:31.641914 | orchestrator | 2025-05-31 17:26:31 - clean up servers 
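
Editor's note: the "Clean the cloud environment" output above tears the testbed down in a deliberate order: servers and keypairs first, then a wait until the servers are really gone, then leftover ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself. The sketch below illustrates that ordering with openstacksdk; it is not the testbed repository's actual cleanup code, and the cloud name, the "testbed" name prefix and the helper structure are assumptions made purely for illustration.

    import openstack

    def cleanup(cloud="testbed-zuul", prefix="testbed"):
        # The clouds.yaml entry name and the resource name prefix are assumptions.
        conn = openstack.connect(cloud=cloud)

        # 1. Servers and keypairs ("clean up servers" / "clean up keypairs").
        servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
        for server in servers:
            conn.compute.delete_server(server)
        for keypair in conn.compute.keypairs():
            if keypair.name.startswith(prefix):
                conn.compute.delete_keypair(keypair)

        # 2. "wait for servers to be gone": ports and volumes stay attached until then.
        for server in servers:
            conn.compute.wait_for_delete(server)

        # 3. Leftover ports on the testbed management network, then volumes.
        net = conn.network.find_network(f"net-{prefix}-management")
        if net:
            for port in conn.network.ports(network_id=net.id):
                conn.network.delete_port(port)
        for volume in conn.block_storage.volumes():
            if volume.name.startswith(f"{prefix}-volume"):
                conn.block_storage.delete_volume(volume)

        # 4. "disconnect routers": detach the subnet before subnet/network removal.
        router = conn.network.find_router(prefix)
        subnet = conn.network.find_subnet(f"subnet-{prefix}-management")
        if router and subnet:
            conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
        if subnet:
            conn.network.delete_subnet(subnet)
        if net:
            conn.network.delete_network(net)

        # 5. Security groups, floating IPs, and finally the router itself.
        for group in conn.network.security_groups():
            if group.name.startswith(prefix):
                conn.network.delete_security_group(group)
        for ip in conn.network.ips():
            # Broader than the real task: this releases every floating IP in the project.
            conn.network.delete_ip(ip)
        if router:
            conn.network.delete_router(router)

The takeaway is the ordering, not the exact calls: compute resources and their attachments must be gone before the networking objects they depend on can be removed, which is why the second cleanup run below finds nothing left to delete and finishes in about a second.
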
2025-05-31 17:26:32.142728 | orchestrator | 2025-05-31 17:26:32 - clean up keypairs 2025-05-31 17:26:32.159902 | orchestrator | 2025-05-31 17:26:32 - wait for servers to be gone 2025-05-31 17:26:32.202137 | orchestrator | 2025-05-31 17:26:32 - clean up ports 2025-05-31 17:26:32.272589 | orchestrator | 2025-05-31 17:26:32 - clean up volumes 2025-05-31 17:26:32.333145 | orchestrator | 2025-05-31 17:26:32 - disconnect routers 2025-05-31 17:26:32.359175 | orchestrator | 2025-05-31 17:26:32 - clean up subnets 2025-05-31 17:26:32.382912 | orchestrator | 2025-05-31 17:26:32 - clean up networks 2025-05-31 17:26:32.538347 | orchestrator | 2025-05-31 17:26:32 - clean up security groups 2025-05-31 17:26:32.579172 | orchestrator | 2025-05-31 17:26:32 - clean up floating ips 2025-05-31 17:26:32.606348 | orchestrator | 2025-05-31 17:26:32 - clean up routers 2025-05-31 17:26:33.021129 | orchestrator | ok: Runtime: 0:00:01.367427 2025-05-31 17:26:33.023215 | 2025-05-31 17:26:33.023315 | PLAY RECAP 2025-05-31 17:26:33.023378 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-31 17:26:33.023406 | 2025-05-31 17:26:33.182768 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-31 17:26:33.185384 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-31 17:26:33.997615 | 2025-05-31 17:26:33.997821 | PLAY [Base post-fetch] 2025-05-31 17:26:34.014483 | 2025-05-31 17:26:34.014653 | TASK [fetch-output : Set log path for multiple nodes] 2025-05-31 17:26:34.070622 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:34.083035 | 2025-05-31 17:26:34.083298 | TASK [fetch-output : Set log path for single node] 2025-05-31 17:26:34.129010 | orchestrator | ok 2025-05-31 17:26:34.138615 | 2025-05-31 17:26:34.138761 | LOOP [fetch-output : Ensure local output dirs] 2025-05-31 17:26:34.631904 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/work/logs" 2025-05-31 17:26:34.903990 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/work/artifacts" 2025-05-31 17:26:35.163717 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6e00cdb8ce344775ac0ff3220588f416/work/docs" 2025-05-31 17:26:35.183568 | 2025-05-31 17:26:35.183736 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-31 17:26:36.156068 | orchestrator | changed: .d..t...... ./ 2025-05-31 17:26:36.156468 | orchestrator | changed: All items complete 2025-05-31 17:26:36.156523 | 2025-05-31 17:26:36.924337 | orchestrator | changed: .d..t...... ./ 2025-05-31 17:26:37.674333 | orchestrator | changed: .d..t...... 
./ 2025-05-31 17:26:37.697991 | 2025-05-31 17:26:37.698110 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-31 17:26:37.739965 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:37.743632 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:37.756847 | 2025-05-31 17:26:37.756951 | PLAY RECAP 2025-05-31 17:26:37.757026 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-31 17:26:37.757065 | 2025-05-31 17:26:37.904775 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-31 17:26:37.905875 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-31 17:26:38.677036 | 2025-05-31 17:26:38.677271 | PLAY [Base post] 2025-05-31 17:26:38.695067 | 2025-05-31 17:26:38.695269 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-31 17:26:39.683700 | orchestrator | changed 2025-05-31 17:26:39.694002 | 2025-05-31 17:26:39.694122 | PLAY RECAP 2025-05-31 17:26:39.694220 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-31 17:26:39.694299 | 2025-05-31 17:26:39.820832 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-31 17:26:39.822983 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-31 17:26:40.664792 | 2025-05-31 17:26:40.664981 | PLAY [Base post-logs] 2025-05-31 17:26:40.676781 | 2025-05-31 17:26:40.676976 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-31 17:26:41.184783 | localhost | changed 2025-05-31 17:26:41.201506 | 2025-05-31 17:26:41.201711 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-31 17:26:41.239324 | localhost | ok 2025-05-31 17:26:41.247712 | 2025-05-31 17:26:41.247838 | TASK [Set zuul-log-path fact] 2025-05-31 17:26:41.268817 | localhost | ok 2025-05-31 17:26:41.288118 | 2025-05-31 17:26:41.288272 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-31 17:26:41.325295 | localhost | ok 2025-05-31 17:26:41.329989 | 2025-05-31 17:26:41.330118 | TASK [upload-logs : Create log directories] 2025-05-31 17:26:41.892060 | localhost | changed 2025-05-31 17:26:41.897290 | 2025-05-31 17:26:41.897454 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-31 17:26:42.460892 | localhost -> localhost | ok: Runtime: 0:00:00.006639 2025-05-31 17:26:42.469735 | 2025-05-31 17:26:42.469934 | TASK [upload-logs : Upload logs to log server] 2025-05-31 17:26:43.066818 | localhost | Output suppressed because no_log was given 2025-05-31 17:26:43.069298 | 2025-05-31 17:26:43.069425 | LOOP [upload-logs : Compress console log and json output] 2025-05-31 17:26:43.132515 | localhost | skipping: Conditional result was False 2025-05-31 17:26:43.138117 | localhost | skipping: Conditional result was False 2025-05-31 17:26:43.147569 | 2025-05-31 17:26:43.147714 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-31 17:26:43.198677 | localhost | skipping: Conditional result was False 2025-05-31 17:26:43.199523 | 2025-05-31 17:26:43.202691 | localhost | skipping: Conditional result was False 2025-05-31 17:26:43.206762 | 2025-05-31 17:26:43.206970 | LOOP [upload-logs : Upload console log and json output]
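
Editor's note: the deploy run above ended with RUN END RESULT_TIMED_OUT while task f2bae605-c816-4e28-b7ae-14a464bce7fe was still reported as STARTED; only the short-lived task 91f0e65b-96c3-4306-b946-3c9a5263b647 reached SUCCESS. The state names match Celery task states, and the repeated "Wait 1 second(s) until the next check" lines come from a client-side polling loop (the roughly 3-second spacing of the log lines suggests each check itself takes a couple of seconds on top of the 1-second sleep). A minimal sketch of that polling pattern, under those assumptions, is below; get_task_state and the deadline handling are illustrative, not the osism client's actual implementation.

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

    def wait_for_task(task_id, get_task_state, check_interval=1.0, deadline=None):
        """Poll a task until it reaches a terminal state.

        get_task_state(task_id) is assumed to return strings such as
        STARTED or SUCCESS, matching the states printed in the log above.
        """
        while True:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                return state
            if deadline is not None and time.monotonic() >= deadline:
                raise TimeoutError(f"Task {task_id} is still in state {state}")
            print(f"Wait {int(check_interval)} second(s) until the next check")
            time.sleep(check_interval)

In this build no client-side deadline fired; the loop was simply cut off when Zuul aborted the deploy playbook at the job timeout and moved on to the post-run, cleanup and log-upload playbooks shown above.
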